Given an array of ints length 3, return an array with the elements "rotated left" so {1, 2, 3} yields {2, 3, 1}.
A Python implementation follows:
```python
def rotate_left3(nums):
    # Move the first element to the end: {1, 2, 3} -> {2, 3, 1}
    return [nums[1], nums[2], nums[0]]
```
This function takes a length-3 integer array `nums` and returns a new array whose elements are "rotated left" by one position: the original first element becomes the last, the second becomes the first, and the third becomes the second.
For example, `rotate_left3([1, 2, 3])` returns `[2, 3, 1]`.
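If the exercise is generalized beyond length 3, the same one-step left rotation can be written with slicing. This is a minimal sketch going beyond the original problem; the name `rotate_left` is illustrative:

```python
def rotate_left(nums):
    # One-step left rotation for a list of any length:
    # the first element moves to the end.
    return nums[1:] + nums[:1]

print(rotate_left([1, 2, 3]))  # [2, 3, 1]
```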
Related questions
3.4 Pair Interaction Feature The interaction pattern between two individuals is encoded by a spatial descriptor with view-invariant relative pose encoding. Given the 3D locations of two individual detections zi, zj and two pose features pi, pj, we represent the pairwise relationship using view normalization, pose co-occurrence encoding, semantic compression and a spatial histogram (see Fig. 5 for illustration). The view normalization is performed by rotating the two people in 3D space by θ with respect to their midpoint, making their connecting line perpendicular to the camera view point. In this step, the pose features are also shifted accordingly (e.g. if θ = 45°, shift 1 dimension with a cycle). Then, the co-occurrence feature is obtained by building a 2-dimensional matrix in which each element (r, c) corresponds to min(pi(r), pj(c)). Although the feature is view invariant, there are still elements in the matrix that deliver the same semantic concepts (e.g. left-left and right-right). To reduce such unnecessary variance and obtain a compact representation, we perform another transformation by multiplying a semantic compression matrix Sc to the vector form of the co-occurrence feature. The matrix Sc is learned offline by enumerating all possible configurations of view points and grouping the pairs that are equivalent when rotated by 180 degrees. Finally, we obtain the pair interaction descriptor by building a spatial histogram based on the 3D distance between the two (bin centers at 0.2, 0.6, 2.0 and 6.5 m). Here, we use linear interpolation similarly to the contextual feature in Sec. 3.3. Given the interaction descriptor for each pair, we represent the interaction feature φxx(xi, xj) using the confidence value from an SVM classifier trained on a dictionary of interaction labels Y. — What does this mean?
This passage describes a method for encoding the interaction pattern between two individuals using a spatial descriptor with view-invariant relative pose encoding. Given the 3D locations zi, zj of two detected individuals and two pose features pi, pj, the pairwise relationship is represented through view normalization, pose co-occurrence encoding, semantic compression, and a spatial histogram (illustrated in Fig. 5).
View normalization rotates the two people in 3D space by θ about their midpoint so that the line connecting them is perpendicular to the camera viewpoint. In this step the pose features are shifted accordingly (for example, if θ = 45°, the feature is cyclically shifted by one dimension). The co-occurrence feature is then obtained by building a 2D matrix whose element (r, c) is min(pi(r), pj(c)). Although this feature is view invariant, some elements of the matrix still convey the same semantic concept (e.g. left-left and right-right). To remove this unnecessary variance and obtain a compact representation, the vectorized co-occurrence feature is multiplied by a semantic compression matrix Sc, learned offline by enumerating all possible viewpoint configurations and grouping the pairs that become equivalent under a 180-degree rotation. Finally, the pair interaction descriptor is obtained by building a spatial histogram over the 3D distance between the two people (bin centers at 0.2, 0.6, 2.0 and 6.5 m), using linear interpolation as with the contextual feature in Sec. 3.3. Given the interaction descriptor for each pair, the interaction feature φxx(xi, xj) is represented by the confidence value of an SVM classifier trained on a dictionary of interaction labels Y.
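To make the two encodings concrete, here is a minimal numpy sketch. It assumes `pi` and `pj` are 1-D pose histograms; the names `cooccurrence`, `spatial_histogram`, and `BIN_CENTERS` are illustrative, and the semantic compression matrix Sc and the SVM scoring step are omitted:

```python
import numpy as np

def cooccurrence(pi, pj):
    # Element (r, c) of the co-occurrence matrix is min(pi[r], pj[c]).
    return np.minimum.outer(pi, pj)

# Bin centers of the 3D-distance histogram, in metres (from the paper).
BIN_CENTERS = np.array([0.2, 0.6, 2.0, 6.5])

def spatial_histogram(d):
    # Soft-assign the pair distance d to the two nearest bin centers
    # with linear interpolation; the weights sum to 1.
    h = np.zeros(len(BIN_CENTERS))
    if d <= BIN_CENTERS[0]:
        h[0] = 1.0
    elif d >= BIN_CENTERS[-1]:
        h[-1] = 1.0
    else:
        k = np.searchsorted(BIN_CENTERS, d) - 1        # left neighbour bin
        t = (d - BIN_CENTERS[k]) / (BIN_CENTERS[k + 1] - BIN_CENTERS[k])
        h[k], h[k + 1] = 1.0 - t, t
    return h

pi = np.array([0.7, 0.1, 0.2])   # hypothetical pose histograms
pj = np.array([0.3, 0.5, 0.2])
C = cooccurrence(pi, pj)         # 3x3 co-occurrence matrix
h = spatial_histogram(1.1)       # weight split between the 0.6 m and 2.0 m bins
```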
```python
from PIL import Image, ImageDraw
import numpy as np
import cv2

# Translate and rotate the image
gray2 = Image.fromarray(src)
width, height = gray2.size

# Compute the pivot point and the angle to the x-axis
center = (max_point[0], max_point[1])
angle = np.arctan2(point2[1] - max_point[1], point2[0] - max_point[0]) * 180 / np.pi

img_translated = gray2.transform(
    (width, height), Image.AFFINE,
    (1, 0, center[0] - width / 2, 0, 1, center[1] - height / 2),
    resample=Image.BICUBIC)
img_translated_rotated = img_translated.rotate(angle, resample=Image.BICUBIC, expand=True)
# img_translated_rotated.show()

# Crop back to the original size
img4 = Image.fromarray(src)
width1, height1 = img4.size
width2, height2 = img_translated_rotated.size
left = (width2 - width1) / 2
top = (height2 - height1) / 2
right = (width2 - width1) / 2 + width1
bottom = (height2 - height1) / 2 + height1
cropped_image = img_translated_rotated.crop((left, top, right, bottom))

GRID_STEP = distance / 2
# Set up a 10*10 grid (tentative)
grid_num_x = 10
grid_num_y = 10

def transform_point_set(points, max_point, distance, angle):
    # Translation vector
    translation_vector = np.array([distance * np.cos(angle * np.pi / 180),
                                   distance * np.sin(angle * np.pi / 180)])
    # Rotation matrix
    rotation_matrix = np.array([[np.cos(angle * np.pi / 180), -np.sin(angle * np.pi / 180)],
                                [np.sin(angle * np.pi / 180),  np.cos(angle * np.pi / 180)]])
    # Convert the point set to numpy arrays
    point_array = np.array(points)
    max_point_array = np.array(max_point)
    # Rotate the points about max_point, then translate
    point_array = (point_array - max_point_array) @ rotation_matrix + max_point_array + translation_vector
    # Convert back to a plain list
    points2 = point_array.tolist()
    return points2

points2 = transform_point_set(points, max_point, distance, angle)
print(points2)

# Part 2.5 (used as a sanity check)
# Crop again (same as above)
img4 = Image.fromarray(src)
width1, height1 = img4.size
width2, height2 = img_translated_rotated.size
left = (width2 - width1) / 2
top = (height2 - height1) / 2
right = (width2 - width1) / 2 + width1
bottom = (height2 - height1) / 2 + height1
cropped_image = img_translated_rotated.crop((left, top, right, bottom))

# Load the cropped image and draw the transformed points
img_array = np.asarray(cropped_image)
img = Image.fromarray(img_array)
draw = ImageDraw.Draw(img)
for point in point  # (snippet truncated in the original)
```
This Python snippet translates, rotates, and crops an image, then applies the same translation and rotation to a set of point coordinates and prints the result. It uses the PIL library (OpenCV is imported, but cv2 is unused in the portion shown). First, Image.fromarray wraps the source array src as the image gray2; the pivot point and rotation angle are computed from max_point and point2, and the image is shifted and rotated with the transform and rotate methods. The crop method then cuts the result back to the original size, and the cropped image is converted to the numpy array img_array. Finally, transform_point_set translates and rotates the point set, and an ImageDraw object with a for loop (truncated in the snippet) is set up to draw the transformed points.
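For reference, here is a toy call of the transform_point_set function from the snippet above; it assumes that definition is in scope, and all input values are hypothetical:

```python
import numpy as np

# Hypothetical inputs: the corners of a unit square, a pivot, and a transform.
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
max_point = (0.0, 0.0)   # pivot of the rotation
distance = 2.0           # length of the translation, in pixels
angle = 90.0             # angle in degrees

# Each point is rotated about max_point by the matrix and then shifted
# by (distance*cos(angle), distance*sin(angle)).
points2 = transform_point_set(points, max_point, distance, angle)
print(np.round(points2, 3))
```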