We have learned how to find the strongly connected components of a directed graph in class. Now, you are asked to implement the algorithm we discussed in class. Given the adjacency lists of a digraph, your function should print, for each node, the root of the traversal tree it belongs to. For example, given the adjacency lists [[3],[0],[0,3,4],[1],[2]], we get two traversal trees after the second run of DFS: T1 = {(0,3), (3,1)} and T2 = {(2,4)}. Then nodes 0, 1, 3 all belong to the tree rooted at node 0, and nodes 2, 4 belong to the tree rooted at node 2, so you need to print the array [0, 0, 2, 0, 2]. You can use any Python built-in package to complete this exercise. For example:

Test: find_scc([[3],[0],[0,3,4],[1],[2]])
Result: [0, 0, 2, 0, 2]

Sure, I understand your question. Here's a Python function `find_scc` that takes the adjacency lists of a directed graph and returns, for each node, the root of the traversal tree (strongly connected component) it belongs to:

```python
def find_scc(adj_list):
    n = len(adj_list)
    visited = [False] * n
    low = [0] * n           # lowest discovery time reachable from each node's subtree
    disc = [0] * n          # discovery time of each node
    on_stack = [False] * n  # whether the node is still on the component stack
    scc = [-1] * n          # root of the component each node belongs to
    timer = 0
    stack = []

    def dfs(u):
        nonlocal timer
        visited[u] = True
        low[u] = disc[u] = timer
        timer += 1
        stack.append(u)
        on_stack[u] = True
        for v in adj_list[u]:
            if not visited[v]:
                dfs(v)
                low[u] = min(low[u], low[v])
            elif on_stack[v]:
                low[u] = min(low[u], disc[v])
        if low[u] == disc[u]:
            while True:
                v = stack.pop()
                on_stack[v] = False
                scc[v] = u
                if v == u:
                    break

    for i in range(n):
        if not visited[i]:
            dfs(i)
    return scc
```

The function first initializes arrays to keep track of the visited nodes, the discovery time (`disc`) and low time (`low`) of each node, whether each node is still on the stack (`on_stack`), the strongly connected component root (`scc`) of each node, and a stack for the DFS. It then defines a nested function `dfs` to perform the DFS on the graph. The DFS starts at a node `u`, sets its discovery and low times to the current time, and pushes `u` onto the stack. It then recursively visits the unvisited neighbors of `u` and folds their low times into `low[u]`; when it meets a neighbor `v` that has already been visited and is still on the stack, there is a back edge into the current component, so `low[u]` is updated with `disc[v]`. If, after all neighbors have been processed, the low time of `u` equals its discovery time, then `u` is the root of a strongly connected component, and all the nodes popped from the stack until `u` is reached belong to that component and are labeled with root `u`. The function finally returns the `scc` array.

To solve the problem, we can simply call `find_scc` on the input adjacency lists: each entry of the returned array is already the root node of the traversal tree that node belongs to, so no extra lookup is needed. Here's an example:

```python
def find_root(adj_list):
    # scc[i] already holds the root node of the component containing i,
    # so we can return the array directly.
    return find_scc(adj_list)

adj_list = [[3],[0],[0,3,4],[1],[2]]
print(find_root(adj_list))  # Output: [0, 0, 2, 0, 2]
```

When we run this code with the example input, it outputs `[0, 0, 2, 0, 2]`, as expected.
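The code above labels components with Tarjan's single-pass algorithm. If you want to mirror the two-pass (second run of DFS) procedure described in the question more literally, here is a minimal sketch, assuming the first DFS runs on the reversed graph to obtain a finishing order and the second DFS runs on the original graph in decreasing finishing order, which is consistent with the tree edges T1 and T2 in the example; the name `find_scc_two_pass` is just illustrative:

```python
def find_scc_two_pass(adj_list):
    # Two-pass (Kosaraju-style) sketch: DFS the reversed graph to get a
    # finishing order, then DFS the original graph in decreasing finishing
    # order, labeling every node with the root of its traversal tree.
    n = len(adj_list)
    rev = [[] for _ in range(n)]
    for u, nbrs in enumerate(adj_list):
        for v in nbrs:
            rev[v].append(u)

    order = []              # nodes in order of increasing finishing time
    visited = [False] * n

    def dfs_rev(u):
        visited[u] = True
        for v in rev[u]:
            if not visited[v]:
                dfs_rev(v)
        order.append(u)

    for u in range(n):
        if not visited[u]:
            dfs_rev(u)

    root = [-1] * n         # root of the second-pass tree for each node

    def dfs_orig(u, r):
        root[u] = r
        for v in adj_list[u]:
            if root[v] == -1:
                dfs_orig(v, r)

    for u in reversed(order):   # decreasing finishing time
        if root[u] == -1:
            dfs_orig(u, u)
    return root

print(find_scc_two_pass([[3],[0],[0,3,4],[1],[2]]))  # [0, 0, 2, 0, 2]
```

On the example input this also prints `[0, 0, 2, 0, 2]`. Which node ends up as a component's label depends on the order in which roots are picked in the second pass, so the two versions agree here but are not guaranteed to choose the same representative on every input.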

