Domain-Adaptive Single-View 3D Reconstruction

Pedro O. Pinheiro (Element AI), Negar Rostamzadeh (Element AI), Sungjin Ahn (Rutgers University)

Abstract

Single-view 3D shape reconstruction is an important but challenging problem, mainly for two reasons. First, as shape annotation is very expensive to acquire, current methods rely on synthetic data, for which ground-truth 3D annotation is easy to obtain. However, this results in a domain adaptation problem when the model is applied to natural images. The second challenge is that multiple shapes can explain a given 2D image. In this paper, we propose a framework that addresses these challenges using adversarial training. On one hand, we impose domain confusion between natural and synthetic image representations to reduce the distribution gap. On the other hand, we force the reconstruction to be 'realistic' by constraining it to lie on a (learned) manifold of realistic object shapes. Our experiments show that these constraints improve performance by a large margin over baseline reconstruction models. We achieve results competitive with the state of the art with a much simpler architecture.

1. Introduction

Humans can easily understand the underlying 3D structure of scenes and objects from single images. This is a hallmark of the human visual system and an essential step towards higher-level visual understanding. The task is extremely ill-posed, because a single image does not contain enough information for 3D reconstruction; a machine vision system therefore needs to rely on priors over shape to infer 3D structure. Efficient and effective 3D prototyping plays an important role in many fields, such as virtual/augmented reality, architecture, robotics and 3D printing, to name a few.
Perhaps more importantly, studying 3D object representations could bring insights into how this information is encoded in intermediate and higher-level visual cortices [53, 26].

Traditional reconstruction methods rely on multiple images of the same object instance [28, 4, 6, 39, 14]. These methods have two strong limitations due to key assumptions [8]: (i) a large number of views is required to achieve reconstruction, and (ii) the objects' appearance is expected to be Lambertian (i.e., non-reflective) and their albedos are supposed to be non-uniform (i.e., rich in non-homogeneous textures).

Figure 1: We propose a framework for (natural) single-view 3D reconstruction exploiting adversarial training in two ways. These constraints are achieved with additional loss terms. We impose domain confusion between natural and rendered images (top) and exploit shape priors to force reconstructions to look realistic (bottom).

Another way to achieve 3D reconstruction is to leverage knowledge of an object's appearance and shape. The main advantage of relying on shape priors is that we do not need accurate feature correspondences across different views; in this case 3D reconstruction can, in principle, be done from a single-view 2D image (assuming the priors are rich enough). Recently, there has been growing interest in learning-based approaches to predicting the canonical shape of an object from a single image [24, 8, 16, 41, 54, 22, 48, 33, 44, 47, 49, 55].
Two technical advances were responsible for this surge: (i) easy access to large-scale 3D Computer-Aided Design (CAD) repositories, such as ShapeNet [7], Pascal3D+ [52], ObjectNet3D [51] and Pix3D [40], and (ii) advances in deep learning techniques [17].

Most of these methods share a similar high-level architecture that regresses a 3D shape from (rendered) images: an encoder transforms a 2D image into a latent representation, and a decoder reconstructs the 3D representation from it. They differ in how constraints from the 3D world are imposed: e.g., [8, 54, 44] enforce multi-view consistency to learn the 3D representation, while [47, 49] make use of 2.5D sketches. These approaches use a large number of CAD models to leverage shape priors (whether or not they make explicit use of the 3D representation).

Single-view 3D reconstruction is a very ill-posed problem. To learn shape priors strong enough to infer 3D structure, deep learning methods require a large amount of 3D object annotation. However, acquiring good 3D object annotation for natural images is an extremely challenging endeavor. Most deep learning approaches therefore make use of synthetic images (which can be rendered easily given a proper 3D representation).

Convolutional neural networks (CNNs) [29] are known to perform sub-optimally when the distribution of the input data changes, a problem known in the computer vision literature as domain shift [43]. For this reason, CNN-based 3D reconstruction trained on synthetic images performs worse when applied to natural images. In this paper, we introduce a method to improve the performance of reconstruction models on natural images, where proper 3D labels are very difficult to acquire.
To achieve this goal, we impose two constraints on the network's reconstruction loss (expressed as additional loss terms), based on shape priors learned from a large 3D CAD repository (see Figure 1). First, inspired by the domain adaptation literature [9, 15], we force the encoded 2D features to be invariant with respect to the domain they come from (rendered or natural). This way, a decoder trained on synthetic images will naturally perform better on real images. Second, we constrain the encoded 2D features to lie on the manifold of realistic object shapes. This constraint forces the decoded 3D reconstruction to look more realistic. These two loss terms are realized through adversarial training [18, 15], an active research topic.

Our main contributions can be summarized as follows: (i) we propose a model and a loss function that exploit learned shape priors to improve the performance of 3D reconstruction from natural images (using adversarial training in two different ways); (ii) we show that this method boosts performance for both voxel and point cloud representations; and (iii) the proposed method achieves results competitive with the state of the art on different datasets, with a much simpler architecture. Moreover, the proposed approach is independent of the encoder-decoder architecture and can be applied to different single-view 3D reconstruction models.

The rest of the paper is organized as follows: Section 2 reviews related work; Section 3 describes how we learn shape priors and exploit them in two different ways to learn reconstruction; Section 4 presents experiments on different datasets; and we conclude in Section 5.
2. Related Work

Single-view 3D reconstruction. Traditional reconstruction methods rely on multiple images of the same object instance [28, 4, 6, 39, 14]. Recently, data-driven approaches to 3D reconstruction from a single image have emerged. These methods can be roughly divided into two categories: (i) methods that explicitly use 3D structure [16, 8, 48, 13, 19, 47, 50] and (ii) methods that infer 3D structure from other sources of information [46, 24, 54, 22, 20, 6, 44, 55]. These deep-learning-based methods generally share a similar (high-level) architecture: an encoder maps a 2D (rendered) image to a latent representation, and a decoder maps this representation to a 3D object. They differ in how constraints from the 3D world are imposed. For instance, [8, 54, 44, 20, 22, 27] enforce multi-view consistency to learn the 3D representation, while [46, 24, 23] exploit keypoint and silhouette annotations. Other approaches [47, 49] exploit 2.5D sketch (surface normal, depth and silhouette) information to improve predictions. Recently, Zhang et al. [56] consider spherical maps (in addition to 2.5D sketches) to learn 3D representations. In contrast to most single-view 3D reconstruction work, their method does not use canonical shapes: each ground-truth 3D representation shares the same viewpoint as its 2D training sample. That work is the first to study shape reconstruction on unseen categories; however, it does not handle the domain adaptation problem. In contrast to all these methods, our approach uses no extra information beyond RGB images. However, in addition to rendered images, we also use unlabeled natural images (which are easy to acquire). We note that our contribution is independent of the architecture of the encoder and decoder (as long as they are differentiable) and can be applied on top of many of these more powerful encoder-decoder architectures. In the experiments, we show that our method improves performance over two baselines: a simple voxel-based encoder-decoder architecture and AtlasNet [19], a state-of-the-art encoder-decoder architecture based on point cloud representations.

Domain adaptation. The difficulty of acquiring 3D annotation for natural images forces reconstruction models to learn from rendered images. It is known in the literature [43, 9] that a model's performance degrades if it is applied to data drawn from a distribution different from the one it was trained on. Ganin et al. [15] handle this problem by enforcing domain confusion (between the two domains) through an adversarial objective. Many works address domain adaptation from synthetic to real images for image classification [36, 37, 34, 38]. In this work, we borrow ideas from the domain adaptation literature and impose domain confusion in a way similar to these previous works. However, we consider the more challenging problem of 3D reconstruction rather than simple image classification.

Shape priors. Reconstructing 3D structure from single-view images requires strong priors about object shape. Many works focus on better capturing the manifold of realistic shapes. Non-deep methods focus on low-dimensional parametric models [3, 24]. The authors of [16, 30] use CNNs to learn a joint embedding space of 2D rendered images and 3D shapes. Other approaches rely on generative modeling to learn shape priors: e.g., [50] uses deep belief networks to model 3D representations, [22, 6, 11] consider variants of variational autoencoders, and [48] uses a variant of GANs [18] to capture the manifold of shapes. In [31], the authors propose an adversarial autoencoder, using adversarial training to match the aggregated posterior for variational inference. Some works use adversarial training for single-view 3D reconstruction. Gwak et al. [20] use GANs to model 2D projections rather than 3D shapes. Closer to our work, Wu et al. [49] use adversarial training to make reconstructions look more natural: they use the discriminator of a pre-trained 3D-GAN [48] to judge whether a shape is realistic. In principle, this approach is similar to one of our contributions; however, its implementation is very different. The input to their discriminator is a high-dimensional 3D shape, which makes training very unstable. In our approach, the input is a single vector in a low-dimensional space.
3. Method

In our reconstruction setting, we are interested in predicting a volumetric representation v_n ∈ V from a canonical-view natural image x_n ∈ I_n ⊂ R^{3×H×W}. In our experiments, the volumetric representation is either a voxel grid (V ⊂ {0,1}^{d_v×d_v×d_v}) or a point cloud (V ⊂ R^{d_v×3}). At training time, we have access to a large set of 3D CAD objects, providing paired rendered images and volumetric representations D_rend = {(x_r^i, v_i)}_{i=1}^{N_r} drawn from a distribution p_r(x, v), and unlabeled natural images D_nat = {x_n^j}_{j=1}^{N_n} drawn from a different distribution p_n(x, v). Note that during training, the model has access to natural images (easy to acquire) but not to their voxel occupancy grids (very hard to acquire).

The proposed method, called the Domain-Adaptive REConstruction network (DAREC), consists of two components: (i) a shape autoencoder, responsible for learning a rich latent representation of 3D objects, and (ii) a reconstruction network, responsible for inferring a voxel occupancy grid from a 2D image.

The shape autoencoder consists of an encoder E and a decoder D. The encoder maps a 3D representation v ∈ V to a low-dimensional embedding e ∈ E ⊂ R^{d_e}; the decoder maps a point in the latent space back to a 3D representation. The voxel shape autoencoder is trained by minimizing an L2 reconstruction loss; the point cloud shape autoencoder is trained by minimizing the Chamfer distance between predicted and ground-truth points. Because the shape autoencoder is trained on real 3D shapes, the learned latent representation lies on a shape manifold E containing low-dimensional embeddings of 'realistic' shapes. This component is trained before the reconstruction network; shape prior information is implicitly encoded in this rich representation space.

The reconstruction network also has an encoder-decoder structure. The encoder f, parameterized by θ_f, maps a 2D image into an embedding space from which a 3D representation can be reconstructed with the decoder. At inference time, the reconstruction network is the only network used to predict voxel occupancy for a given natural test image. The model is trained so that the encoder mapping f: I → E simultaneously: (i) allows reconstruction of the 3D representation given a rendered image, (ii) is indistinguishable with respect to the domain the image comes from (synthetic or real), and (iii) stays on the manifold of 'realistic' shapes (learned by the shape autoencoder). To impose these constraints, we define and add corresponding terms to the loss function. Figure 2 shows an overview of the approach.

The reconstruction loss L_rec is applied to tuples of rendered images and 3D representations (from D_rend). We use the L2 reconstruction loss when considering voxel representations and the Chamfer distance (as in [13, 19]) when considering point cloud representations. We choose not to update the decoder parameters during this training stage. This design choice, combined with the constraint imposed by the third loss, forces the image representation to lie on the manifold of 'realistic' shapes. In the remainder of this section, we show how adversarial training techniques and (learned) shape priors can be exploited to improve 3D reconstruction performance on natural images.
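The Chamfer distance used to train the point-cloud autoencoder can be sketched as follows. This is a minimal NumPy illustration of the standard symmetric Chamfer distance, not the authors' implementation (in practice it would be computed batched, on GPU, inside an autodiff framework):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set; sum the two directed averages.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Two toy point sets: identical sets give distance 0.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(chamfer_distance(a, a))  # 0.0
print(chamfer_distance(a, b))  # 1.0
```

Unlike the voxel L2 loss, this loss is invariant to the ordering of points, which is why it is the natural choice for unordered point-cloud representations.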
3.1. Confusing the Image Domains

It is well known that machine learning algorithms suffer from domain shift [43]. Consequently, a model trained on rendered images performs poorly when applied to natural images. Theoretical studies [2, 1] suggest that a representation transfers well across domains when it is rich enough to solve the task yet uninformative about which domain the input came from.
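The domain-confusion term can be illustrated with a linear domain discriminator applied to the image embeddings. This is a hypothetical NumPy sketch, not the paper's implementation: `w` and `b` stand in for the discriminator's parameters, and in the adversarial setup of Ganin et al. [15] the encoder is updated with the *reversed* gradient of this loss so that the two embedding distributions become indistinguishable:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def domain_confusion_loss(z_rend, z_nat, w, b):
    """Binary cross-entropy of a linear domain discriminator on embeddings.

    Labels: 1 = rendered, 0 = natural. The discriminator minimizes this
    loss; the encoder receives its gradient reversed, pushing the two
    embedding distributions together.
    """
    eps = 1e-12  # numerical guard for the logarithms
    p_rend = sigmoid(z_rend @ w + b)  # P(domain = rendered | embedding)
    p_nat = sigmoid(z_nat @ w + b)
    return -(np.mean(np.log(p_rend + eps)) + np.mean(np.log(1.0 - p_nat + eps)))

# An uninformative discriminator outputs 0.5 everywhere, i.e. the
# maximal-confusion operating point, where the loss equals 2*log(2).
z_r, z_n = np.zeros((4, 8)), np.zeros((4, 8))
print(domain_confusion_loss(z_r, z_n, np.zeros(8), 0.0))  # ≈ 1.3863
```

When the embeddings are perfectly domain-confused, no discriminator can do better than chance, which is exactly the state the adversarial objective drives the encoder towards.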