In a WGAN-GP network, why does the generator's loss keep increasing while the discriminator's loss keeps decreasing?

This usually means the critic (discriminator) is winning the adversarial game: the generator's architecture may be too weak, or the training data too limited, so the generator cannot produce convincing samples, while the critic keeps getting better at separating real samples from generated ones. As the critic improves, its loss falls and the generator's loss rises. In WGAN-GP the critic loss approximates the negative Wasserstein distance, so a steadily decreasing critic loss together with a rising generator loss indicates that the distance between the real and generated distributions is not shrinking, i.e. the generated samples are not improving. You can try adjusting the generator's architecture or enlarging the training dataset to address this.
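As a practical diagnostic, it often helps to log the critic's Wasserstein estimate (mean critic score on real samples minus mean score on generated samples) during training; if this gap keeps widening, the generator is falling behind. A minimal PyTorch sketch, assuming a critic and generator like the ones in the answers below:

```python
import torch

@torch.no_grad()
def wasserstein_estimate(critic, generator, real_batch, latent_dim=100):
    """Rough estimate of the Wasserstein distance the critic has learned."""
    z = torch.randn(real_batch.size(0), latent_dim, device=real_batch.device)
    fake_batch = generator(z)
    # Larger values mean the critic separates real from fake more easily,
    # i.e. the generator is currently losing the game.
    return (critic(real_batch).mean() - critic(fake_batch).mean()).item()

# Example: call this every few hundred iterations and plot the values;
# a healthy run shows the estimate shrinking (or at least not exploding).
```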
Related questions

WGAN-GP code

WGAN-GP is a GAN-based deep learning model that uses a gradient penalty (GP) to address the instability and mode collapse seen when training GANs. Below is a simple PyTorch implementation of WGAN-GP:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np

# Generator network
class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_shape=(1, 28, 28)):
        super(Generator, self).__init__()
        self.latent_dim = latent_dim
        self.img_shape = img_shape
        self.model = nn.Sequential(
            nn.Linear(self.latent_dim, 128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(128, 256),
            nn.BatchNorm1d(256, 0.8),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 512),
            nn.BatchNorm1d(512, 0.8),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, int(np.prod(self.img_shape))),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.size(0), *self.img_shape)
        return img

# Critic (discriminator) network: outputs an unbounded score, no sigmoid
class Discriminator(nn.Module):
    def __init__(self, img_shape=(1, 28, 28)):
        super(Discriminator, self).__init__()
        self.img_shape = img_shape
        self.model = nn.Sequential(
            nn.Linear(int(np.prod(self.img_shape)), 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, img):
        img = img.view(img.size(0), -1)
        validity = self.model(img)
        return validity

# WGAN-GP wrapper holding both networks and the gradient-penalty term
class WGAN_GP(nn.Module):
    def __init__(self, latent_dim=100, img_shape=(1, 28, 28), lambda_gp=10):
        super(WGAN_GP, self).__init__()
        self.generator = Generator(latent_dim, img_shape)
        self.discriminator = Discriminator(img_shape)
        self.lambda_gp = lambda_gp

    def forward(self, z):
        return self.generator(z)

    def gradient_penalty(self, real_images, fake_images):
        batch_size = real_images.size(0)
        # Random interpolation weights, one per sample
        alpha = torch.rand(batch_size, 1, 1, 1).cuda()
        alpha = alpha.expand_as(real_images)
        # Interpolated images between real and fake
        interpolated = (alpha * real_images) + ((1 - alpha) * fake_images)
        interpolated.requires_grad_(True)
        # Critic output on the interpolated images
        prob_interpolated = self.discriminator(interpolated)
        # Gradient of the critic output with respect to the interpolated images
        gradients = torch.autograd.grad(outputs=prob_interpolated, inputs=interpolated,
                                        grad_outputs=torch.ones(prob_interpolated.size()).cuda(),
                                        create_graph=True, retain_graph=True)[0]
        # Penalize deviation of the gradient norm from 1
        gradients = gradients.view(batch_size, -1)
        gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean() * self.lambda_gp
        return gradient_penalty

# Training loop
def train_wgan_gp(model, dataloader, num_epochs=200, lr=0.0002, betas=(0.5, 0.999)):
    generator, discriminator = model.generator, model.discriminator
    optimizer_G = optim.Adam(generator.parameters(), lr=lr, betas=betas)
    optimizer_D = optim.Adam(discriminator.parameters(), lr=lr, betas=betas)

    for epoch in range(num_epochs):
        for i, (imgs, _) in enumerate(dataloader):
            batch_size = imgs.shape[0]
            real_imgs = imgs.cuda()

            # ---- Train the critic ----
            optimizer_D.zero_grad()
            z = torch.randn(batch_size, 100).cuda()
            fake_imgs = model(z)
            # Wasserstein critic loss plus gradient penalty
            loss_D = -torch.mean(discriminator(real_imgs)) + torch.mean(discriminator(fake_imgs.detach()))
            loss_D += model.gradient_penalty(real_imgs, fake_imgs.detach())
            loss_D.backward()
            optimizer_D.step()
            # Note: WGAN-GP does NOT clip the critic's weights; the gradient
            # penalty replaces the weight clipping used in the original WGAN.

            # ---- Train the generator ----
            optimizer_G.zero_grad()
            z = torch.randn(batch_size, 100).cuda()
            fake_imgs = model(z)
            loss_G = -torch.mean(discriminator(fake_imgs))
            loss_G.backward()
            optimizer_G.step()

            if i % 50 == 0:
                print("[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
                      % (epoch, num_epochs, i, len(dataloader), loss_D.item(), loss_G.item()))
```

To use this code, prepare your dataset as a PyTorch DataLoader, build a WGAN_GP model, and call train_wgan_gp to start training.
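For reference, here is a minimal usage sketch with MNIST (assuming torchvision is installed and a CUDA device is available, since the listing above calls .cuda()):

```python
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

# Normalize MNIST images to [-1, 1] to match the generator's Tanh output
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
mnist = torchvision.datasets.MNIST(root="./data", train=True, transform=transform, download=True)
dataloader = DataLoader(mnist, batch_size=64, shuffle=True)

model = WGAN_GP(latent_dim=100, img_shape=(1, 28, 28), lambda_gp=10).cuda()
train_wgan_gp(model, dataloader, num_epochs=50)
```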

Training a WGAN-GP on the MNIST dataset

Here are the steps to train a WGAN-GP on the MNIST dataset:

1. Import the required libraries and modules:
```python
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
```

2. Load the MNIST dataset:
```python
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5  # normalize pixel values to [-1, 1]

BUFFER_SIZE = 60000
BATCH_SIZE = 256
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
```

3. Define the generator and discriminator models:
```python
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # note: the batch size is unconstrained

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model

def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=[28, 28, 1]))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))

    model.add(layers.Flatten())
    model.add(layers.Dense(1))

    return model
```

4. Define the loss functions and optimizers (note that these are the plain WGAN losses; a sketch of the missing gradient-penalty term follows after step 7):
```python
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

def discriminator_loss(real_output, fake_output):
    real_loss = tf.reduce_mean(real_output)
    fake_loss = tf.reduce_mean(fake_output)
    return fake_loss - real_loss

def generator_loss(fake_output):
    return -tf.reduce_mean(fake_output)
```

5. Define the training step:
```python
@tf.function
def train_step(images):
    noise = tf.random.normal([BATCH_SIZE, 100])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
```

6. Train the model (generate_and_save_images is defined in step 7 and must be defined before this loop is run):
```python
EPOCHS = 100
noise_dim = 100
num_examples_to_generate = 16

# Reuse the same seed so progress is easier to visualize (e.g. in an animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])

generator = make_generator_model()
discriminator = make_discriminator_model()

for epoch in range(EPOCHS):
    for image_batch in train_dataset:
        train_step(image_batch)

    # Generate sample images every 15 epochs
    if epoch % 15 == 0:
        generate_and_save_images(generator, epoch + 1, seed)

# Generate the final images
generate_and_save_images(generator, EPOCHS, seed)
```

7. Generate and save images:
```python
def generate_and_save_images(model, epoch, test_input):
    # training is set to False so that all layers run in inference mode (batchnorm)
    predictions = model(test_input, training=False)

    fig = plt.figure(figsize=(4, 4))
    for i in range(predictions.shape[0]):
        plt.subplot(4, 4, i+1)
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
        plt.axis('off')

    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
```
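Note that the losses in step 4 are the plain WGAN critic and generator losses; as written, nothing enforces the Lipschitz constraint. A minimal sketch of the missing gradient-penalty term is shown below (the GP_WEIGHT value of 10 and the way it is added inside train_step are assumptions on top of the tutorial above, not part of it):

```python
GP_WEIGHT = 10.0  # assumed penalty coefficient, as in the WGAN-GP paper

def gradient_penalty(discriminator, real_images, fake_images):
    # Interpolate between real and generated images
    batch_size = tf.shape(real_images)[0]
    epsilon = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = epsilon * real_images + (1.0 - epsilon) * fake_images

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        pred = discriminator(interpolated, training=True)

    # Penalize deviation of the critic's gradient norm from 1
    grads = tape.gradient(pred, interpolated)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return tf.reduce_mean((norm - 1.0) ** 2)

# Inside train_step (within the disc_tape context), the critic loss would then become:
#   disc_loss = discriminator_loss(real_output, fake_output) \
#               + GP_WEIGHT * gradient_penalty(discriminator, images, generated_images)
```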

Related recommendations

For a PyTorch implementation of the Wasserstein GAN (WGAN), you can follow these steps:

1. Import the required libraries:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torchvision import transforms
```

2. Define the generator and critic (discriminator) networks:
```python
class Generator(nn.Module):
    def __init__(self, latent_dim, img_shape):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.LeakyReLU(0.2),
            nn.Linear(128, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.BatchNorm1d(512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, img_shape),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        return img

class Discriminator(nn.Module):
    def __init__(self, img_shape):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(img_shape, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1)
        )

    def forward(self, img):
        validity = self.model(img)
        return validity
```

3. Define the WGAN critic loss:
```python
def wgan_loss(real_imgs, fake_imgs, critic_real, critic_fake):
    return torch.mean(critic_fake) - torch.mean(critic_real)
```

4. Initialize the generator, critic, and optimizers:
```python
latent_dim = 100
img_shape = 784

generator = Generator(latent_dim, img_shape)
discriminator = Discriminator(img_shape)

optimizer_G = optim.RMSprop(generator.parameters(), lr=0.00005)
optimizer_D = optim.RMSprop(discriminator.parameters(), lr=0.00005)
```

5. Train the WGAN model (this assumes a `dataloader` and a `device` have already been set up; see the sketch after this answer):
```python
n_epochs = 200
clip_value = 0.01

for epoch in range(n_epochs):
    for i, (real_imgs, _) in enumerate(dataloader):
        batch_size = real_imgs.shape[0]
        real_imgs = real_imgs.view(batch_size, -1).to(device)

        # Train the critic
        optimizer_D.zero_grad()
        z = torch.randn(batch_size, latent_dim).to(device)
        fake_imgs = generator(z)
        critic_real = discriminator(real_imgs)
        critic_fake = discriminator(fake_imgs.detach())
        d_loss = wgan_loss(real_imgs, fake_imgs, critic_real, critic_fake)
        d_loss.backward()
        optimizer_D.step()

        # Weight clipping enforces the Lipschitz constraint in the original WGAN
        for p in discriminator.parameters():
            p.data.clamp_(-clip_value, clip_value)

        # Train the generator once every 5 critic updates
        if i % 5 == 0:
            optimizer_G.zero_grad()
            z = torch.randn(batch_size, latent_dim).to(device)
            fake_imgs = generator(z)
            critic_fake = discriminator(fake_imgs)
            g_loss = -torch.mean(critic_fake)
            g_loss.backward()
            optimizer_G.step()
```

This is only a simple WGAN example; you can modify and extend it to suit your needs. Remember to load the dataset before training and to move the models and data to an appropriate device (such as a GPU) for faster training.
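As a companion to the loop above, here is a minimal sketch of the `device` and `dataloader` setup it assumes (MNIST as an example, normalized to [-1, 1] to match the generator's Tanh output):

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),  # scale pixels to [-1, 1]
])
dataset = MNIST(root='./data', train=True, transform=transform, download=True)
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)

# Move the networks to the same device as the data
generator = generator.to(device)
discriminator = discriminator.to(device)
```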
Below is a simple WGAN-GP code example for generating small grayscale images (with this architecture the generated images are 16x16), implemented in PyTorch.

```python
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
from torchvision import datasets
from torchvision.utils import save_image
from torch.utils.data import DataLoader
from torch.autograd import grad

# Generator and critic (discriminator) networks
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.fc = nn.Linear(100, 256)
        self.conv = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.BatchNorm2d(8),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.Tanh()
        )

    def forward(self, z):
        x = self.fc(z)
        x = x.view(-1, 16, 4, 4)
        x = self.conv(x)
        return x

class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1),   # 16x16 -> 8x8
            nn.BatchNorm2d(8),
            nn.LeakyReLU(),
            nn.Conv2d(8, 16, 4, stride=2, padding=1),  # 8x8 -> 4x4
            nn.BatchNorm2d(16),
            nn.LeakyReLU()
        )
        self.fc = nn.Linear(256, 1)

    def forward(self, x):
        x = self.conv(x)
        x = x.view(-1, 256)
        x = self.fc(x)
        return x

# WGAN-GP critic loss: Wasserstein term plus gradient penalty
def wgan_gp_loss(real, fake, discriminator, device):
    real_out = discriminator(real)
    fake_out = discriminator(fake)
    d_loss = fake_out.mean() - real_out.mean()

    # Gradient penalty on random interpolations between real and fake images
    epsilon = torch.rand(real.shape[0], 1, 1, 1).to(device)
    interpolated = (epsilon * real + (1 - epsilon) * fake).detach()
    interpolated.requires_grad_(True)
    interpolated_out = discriminator(interpolated)
    gradients = grad(outputs=interpolated_out, inputs=interpolated,
                     grad_outputs=torch.ones_like(interpolated_out),
                     create_graph=True, retain_graph=True, only_inputs=True)[0]
    gradients = gradients.view(gradients.size(0), -1)
    gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean() * 10
    d_loss += gradient_penalty
    return d_loss

# Training parameters and hyperparameters
batch_size = 64
lr = 0.0001
z_dim = 100
n_epochs = 200
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the dataset, resized to 16x16
transform = transforms.Compose([
    transforms.Resize(16),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5])
])
dataset = DataLoader(datasets.MNIST(root='./data', train=True, transform=transform, download=True),
                     batch_size=batch_size, shuffle=True)

# Initialize the generator and critic
generator = Generator().to(device)
discriminator = Discriminator().to(device)

# Optimizers
g_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
d_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))

os.makedirs("images", exist_ok=True)

# Train the WGAN-GP model
for epoch in range(n_epochs):
    for i, (real_images, _) in enumerate(dataset):
        real_images = real_images.to(device)

        # Train the critic 5 times per generator update
        for j in range(5):
            z = torch.randn(real_images.shape[0], z_dim).to(device)
            fake_images = generator(z)

            d_optimizer.zero_grad()
            d_loss = wgan_gp_loss(real_images, fake_images.detach(), discriminator, device)
            d_loss.backward()
            d_optimizer.step()
            # Note: no weight clipping here; the gradient penalty replaces it in WGAN-GP

        # Train the generator
        z = torch.randn(real_images.shape[0], z_dim).to(device)
        fake_images = generator(z)
        g_optimizer.zero_grad()
        g_loss = -discriminator(fake_images).mean()
        g_loss.backward()
        g_optimizer.step()

        # Log training progress
        if i % 100 == 0:
            print(f"Epoch [{epoch}/{n_epochs}] Batch [{i}/{len(dataset)}] D loss: {d_loss:.4f} | G loss: {g_loss:.4f}")

    # Save a generated sample image at the end of each epoch
    with torch.no_grad():
        z = torch.randn(1, z_dim).to(device)
        fake_image = generator(z).squeeze()
        save_image(fake_image, f"images/{epoch}.png", normalize=True)
```

In this example we define a generator and a critic for 16x16 grayscale images. During training we use the WGAN-GP loss and optimize both networks with Adam, and at the end of each epoch we sample a random noise vector, generate a fake image, and save it as a PNG file.
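As a quick sanity check on the architecture above, you can verify the generator's output shape and that the critic accepts it (this snippet is illustrative only and reuses the classes defined in that listing):

```python
import torch

g = Generator()
d = Discriminator()

z = torch.randn(4, 100)   # a batch of 4 latent vectors
imgs = g(z)
print(imgs.shape)         # expected: torch.Size([4, 1, 16, 16])
print(d(imgs).shape)      # expected: torch.Size([4, 1])
```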
The loss function of a generative adversarial network is defined by the competition between the generator and the discriminator during adversarial training. In the referenced WGAN code, the generator's loss is computed as g_loss = adversarial_loss(discriminator(gen_imgs), real), where adversarial_loss is the discriminator's loss function, gen_imgs are the images produced by the generator, and real are the real images: the generator's loss is obtained by feeding its generated images into the discriminator and comparing the output against the real images' label.

WGAN-GP additionally introduces a gradient penalty in place of weight clipping. The purpose of the gradient penalty is to keep the norm of the critic's gradient close to 1 everywhere (enforcing the 1-Lipschitz constraint), which helps avoid exploding and vanishing gradients. This is done by adding a penalty term to the objective that constrains the critic's output with respect to its input.

In summary, the GAN loss is defined through the competition between the discriminator and the generator. WGAN-style training uses an adversarial loss for the generator and improves stability through either weight clipping (WGAN) or a gradient penalty (WGAN-GP). [1][2][3]

References:
- [1] Loss functions in generative adversarial networks (对抗生成网络(GAN)中的损失函数): https://blog.csdn.net/L888666Q/article/details/127793598
- [2][3] Generative adversarial networks, part 4: WGAN-GP (生成对抗网络(四) WGAN-GP): https://blog.csdn.net/gyt15663668337/article/details/90271265
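For concreteness, the full WGAN-GP critic objective from the original paper can be written as follows, where D is the critic, P_r and P_g are the real and generated data distributions, x-hat is a random interpolation between a real and a generated sample, and lambda is the penalty coefficient (10 in the paper):

$$
L_{D} = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big] + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\Big[\big(\lVert\nabla_{\hat{x}}D(\hat{x})\rVert_2 - 1\big)^2\Big]
$$

and the generator minimizes $L_G = -\mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}[D(\tilde{x})]$, which is exactly what the code snippets on this page compute.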
Here is a simple WGAN code example (with a gradient penalty) that can serve as a starting point for addressing class imbalance in structured data; MNIST, flattened to 784-dimensional vectors, is used here as a stand-in dataset:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Generator model
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Dense(512, use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Dense(1024, use_bias=False))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    # Output one flattened 784-dimensional sample per latent vector
    model.add(layers.Dense(784, activation='tanh'))
    return model

# Critic (discriminator) model
def make_discriminator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(1024, input_shape=(784,)))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Dense(512))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Dense(256))
    model.add(layers.LeakyReLU())
    model.add(layers.Dropout(0.3))
    model.add(layers.Dense(1))
    return model

# WGAN model with gradient penalty
class WGAN(tf.keras.Model):
    def __init__(
        self,
        discriminator,
        generator,
        latent_dim,
        d_optimizer,
        g_optimizer,
        discriminator_extra_steps=3,
        gp_weight=10.,
    ):
        super(WGAN, self).__init__()
        self.discriminator = discriminator
        self.generator = generator
        self.latent_dim = latent_dim
        self.d_optimizer = d_optimizer
        self.g_optimizer = g_optimizer
        self.d_steps = discriminator_extra_steps
        self.gp_weight = gp_weight

    # Critic loss: Wasserstein term plus gradient penalty
    def discriminator_loss(self, real, fake, interpolated):
        real_loss = tf.reduce_mean(real)
        fake_loss = tf.reduce_mean(fake)
        gradient_penalty = self.gradient_penalty(interpolated)
        return fake_loss - real_loss + gradient_penalty * self.gp_weight

    # Generator loss
    def generator_loss(self, fake):
        return -tf.reduce_mean(fake)

    # Gradient penalty: keep the critic's gradient norm close to 1
    def gradient_penalty(self, interpolated):
        with tf.GradientTape() as tape:
            tape.watch(interpolated)
            pred = self.discriminator(interpolated)
        gradients = tape.gradient(pred, interpolated)
        norm = tf.norm(tf.reshape(gradients, [tf.shape(gradients)[0], -1]), axis=1)
        gp = tf.reduce_mean((norm - 1.) ** 2)
        return gp

    # One training step: several critic updates, then one generator update
    @tf.function
    def train_step(self, real_data):
        batch_size = tf.shape(real_data)[0]
        noise = tf.random.normal([batch_size, self.latent_dim])

        # Train the critic
        for i in range(self.d_steps):
            with tf.GradientTape() as tape:
                fake_data = self.generator(noise)
                interpolated = real_data + tf.random.uniform(
                    tf.shape(real_data), minval=0., maxval=1.
                ) * (fake_data - real_data)
                real_pred = self.discriminator(real_data)
                fake_pred = self.discriminator(fake_data)
                disc_loss = self.discriminator_loss(real_pred, fake_pred, interpolated)
            grads = tape.gradient(disc_loss, self.discriminator.trainable_weights)
            self.d_optimizer.apply_gradients(
                zip(grads, self.discriminator.trainable_weights)
            )

        # Train the generator
        with tf.GradientTape() as tape:
            fake_data = self.generator(noise)
            fake_pred = self.discriminator(fake_data)
            gen_loss = self.generator_loss(fake_pred)
        grads = tape.gradient(gen_loss, self.generator.trainable_weights)
        self.g_optimizer.apply_gradients(
            zip(grads, self.generator.trainable_weights)
        )

        return {"d_loss": disc_loss, "g_loss": gen_loss}

# Load the dataset
(train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 784).astype("float32")
train_images = (train_images - 127.5) / 127.5  # normalize pixel values to [-1, 1]

# Hyperparameters
BUFFER_SIZE = 60000
BATCH_SIZE = 64
EPOCHS = 50
LATENT_DIM = 100

# Build the dataset
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

# Build the models
generator = make_generator_model()
discriminator = make_discriminator_model()

# Optimizers
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

# Build the WGAN model
wgan = WGAN(
    discriminator=discriminator,
    generator=generator,
    latent_dim=LATENT_DIM,
    d_optimizer=discriminator_optimizer,
    g_optimizer=generator_optimizer,
    discriminator_extra_steps=3,
    gp_weight=10.,
)

# Train the model
for epoch in range(EPOCHS):
    for real_data in train_dataset:
        losses = wgan.train_step(real_data)
    # Print the losses from the last batch of the epoch
    print(f"Epoch {epoch+1}, Discriminator loss: {losses['d_loss'].numpy()}, Generator loss: {losses['g_loss'].numpy()}")
```

Note: this is only a simple example; in practice you will need to adapt and tune it for your specific problem.
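If the goal really is to oversample a minority class in tabular data rather than images, only the data-loading part needs to change. A hedged sketch of how that might look (feature_matrix and n_features are placeholder names, not part of the example above):

```python
import numpy as np
import tensorflow as tf

# Hypothetical tabular data: rows of the minority class only, already scaled to [-1, 1]
n_features = 32  # placeholder feature count
feature_matrix = np.random.uniform(-1, 1, size=(5000, n_features)).astype("float32")

# The generator's final Dense layer and the discriminator's input_shape
# would then use n_features instead of 784.
train_dataset = (
    tf.data.Dataset.from_tensor_slices(feature_matrix)
    .shuffle(len(feature_matrix))
    .batch(64)
)

# After training, draw synthetic minority-class samples from the generator, e.g.:
# synthetic = generator(tf.random.normal([1000, LATENT_DIM])).numpy()
```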
