Cracking the Frontend Interview: JavaScript Number Precision Loss, Explained

In frontend interviews at major companies, interviewers often test a candidate's understanding of number precision loss in JavaScript. The issue comes down to how floating-point arithmetic behaves in computers, and in particular to the pitfalls of doing arithmetic with the Number type. In JavaScript, Number is encoded as a 64-bit double-precision float under the IEEE 754 standard, a representation that normalizes both integer and fractional values into a single fixed layout to save storage space.

When you evaluate an expression like `0.1 + 0.2 === 0.3`, the result is `false`, because the floating-point values the computer stores are not infinitely precise. The storage format forces them to exist as approximations: in binary, fractions such as 0.1 and 0.2 become repeating, non-terminating expansions, so they cannot be represented exactly. Mathematically, 0.1 plus 0.2 equals 0.3, but inside the machine each operand is rounded to a finite number of bits, and the sum of the two rounded values does not land exactly on the rounded value of 0.3.

For example, `0.1` is actually stored as `0.1000000000000000055511151231257827021181583404541015625`, and `0.2` as `0.200000000000000011102230246251565404236316680908203125`. When these two approximations are added, the rounding error nudges the result slightly away from 0.3, and the equality check fails.

One way to reason about the precision loss is to understand the storage mechanism itself. IEEE 754 keeps every finite Number in a normalized form of scientific notation (exponent notation): just as 0.27 can be written as `2.7e-1` in decimal, every double is normalized to a binary significand of the shape 1.f times a power of two, stored as a sign bit, an 11-bit exponent, and a 52-bit fraction. With the binary point fixed by normalization, only 52 bits remain for the fraction, so any repeating binary expansion must be rounded off at bit 52; that rounding is exactly where the error in `0.1 + 0.2` originates.

In practice, when high-precision computation is required, reach for a dedicated tool: the `BigInt` type for exact large-integer arithmetic, or a third-party library such as `decimal.js` for high-precision decimal arithmetic. Understanding the underlying principles helps a frontend engineer handle complex numeric logic with confidence and avoid unnecessary precision bugs. The sketches below illustrate each of these points in turn.
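To make this concrete, a quick console session (plain JavaScript, nothing beyond built-in Number methods) reproduces the failing comparison and surfaces the stored approximations quoted above:

```javascript
// Reproduce the classic comparison failure in any JS console.
console.log(0.1 + 0.2);             // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);     // false

// toFixed exposes the stored approximations behind the literals:
console.log((0.1).toFixed(20));       // "0.10000000000000000555"
console.log((0.2).toFixed(20));       // "0.20000000000000001110"
console.log((0.1 + 0.2).toFixed(20)); // "0.30000000000000004441"
```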
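The normalized layout itself can be inspected from JavaScript. The sketch below uses only the standard ArrayBuffer/DataView APIs to dump the sign, exponent, and fraction fields of 0.1; it illustrates the 1-11-52 bit split described above:

```javascript
// Dump the raw IEEE 754 bit pattern of 0.1: 1 sign bit, 11 exponent
// bits, 52 fraction bits. setFloat64 writes big-endian by default,
// so the bytes come out in sign-to-fraction order.
const buf = new ArrayBuffer(8);
new DataView(buf).setFloat64(0, 0.1);
const bits = [...new Uint8Array(buf)]
  .map(b => b.toString(2).padStart(8, '0'))
  .join('');
console.log(bits.slice(0, 1));  // "0"           (sign: positive)
console.log(bits.slice(1, 12)); // "01111111011" (biased exponent 1019, i.e. 2^-4)
console.log(bits.slice(12));    // 52 fraction bits: the repeating pattern
                                // 1001... rounded up to ...1010 at bit 52
```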
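For the follow-up question of how to cope with the error in everyday code, a few common mitigations are sketched below. `nearlyEqual` and `addCents` are hypothetical helper names chosen for this illustration; `Number.EPSILON` and `BigInt` are standard language features:

```javascript
// Tolerance comparison: Number.EPSILON is the gap between 1 and the
// next representable double (2^-52), a reasonable tolerance for
// values near 1.
function nearlyEqual(a, b) {
  return Math.abs(a - b) < Number.EPSILON;
}
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true

// Integer scaling for money-style values: 10 + 20 is exact, so the
// fractional rounding error never enters the sum.
function addCents(a, b) {
  return (Math.round(a * 100) + Math.round(b * 100)) / 100;
}
console.log(addCents(0.1, 0.2)); // 0.3

// BigInt gives exact arithmetic on arbitrarily large integers:
console.log(2n ** 64n); // 18446744073709551616n
```

When full arbitrary-precision decimal arithmetic is needed beyond these patterns, a library such as `decimal.js` is the usual choice, as noted above.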
