Python code that uses the PINN method to solve the 1D Sod problem and plot the predicted and exact solutions
The following is Python code that uses the PINN method to solve the one-dimensional Sod shock-tube problem and compare the predicted solution with a reference solution.
First, we import the required libraries and modules:
```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
```
Next, we define some constants and parameters, including the physical constants, the grid size, the final time, and the time step:
```python
# Define physical constants
gamma = 1.4
rho_l = 1.0
u_l = 0.0
p_l = 1.0
rho_r = 0.125
u_r = 0.0
p_r = 0.1
# Define computational parameters
N = 1000
T = 0.2
dt = 0.0001
```
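As a quick sanity check on these left/right states, the sound speed c = sqrt(gamma * p / rho) can be computed on each side of the diaphragm (a standalone sketch with the constants re-declared locally):

```python
import numpy as np

gamma = 1.4
# Sound speed c = sqrt(gamma * p / rho) on each side of the x = 0 diaphragm
c_l = np.sqrt(gamma * 1.0 / 1.0)      # left state:  rho = 1.0,   p = 1.0
c_r = np.sqrt(gamma * 0.1 / 0.125)    # right state: rho = 0.125, p = 0.1
print(round(c_l, 4), round(c_r, 4))   # 1.1832 1.0583
```

These values set the characteristic speeds u ± c that bound the wave fan, which is useful when choosing the final time T so the waves stay inside the domain.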
Next, we define a few functions: the initial condition, the PINN model, and the PINN loss function:
```python
# Define initial condition function (conserved variables: rho, rho*u, E)
def initial_condition(x):
    rho = np.where(x < 0, rho_l, rho_r)
    u = np.where(x < 0, u_l, u_r)
    p = np.where(x < 0, p_l, p_r)
    E = p / (gamma - 1) + 0.5 * rho * u**2  # total energy per unit volume
    return np.array([rho, rho * u, E])
# Define PINN model: a small fully connected network mapping (x, t)
# to the conserved variables (rho, rho*u, E)
class PINN(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = keras.layers.Dense(50, activation='tanh')
        self.dense2 = keras.layers.Dense(50, activation='tanh')
        self.dense3 = keras.layers.Dense(50, activation='tanh')
        self.dense4 = keras.layers.Dense(3, activation=None)

    def call(self, inputs):
        # inputs is already the concatenated (x, t) pair
        H = self.dense1(inputs)
        H = self.dense2(H)
        H = self.dense3(H)
        return self.dense4(H)
# Define PINN loss function: residuals of the 1D Euler equations in
# conservative form, U_t + F(U)_x = 0, with U = (rho, rho*u, E)
def PINN_loss(model, x, t):
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        tape.watch(t)
        X = tf.concat([x, t], axis=1)
        Y = model(X)
        rho = Y[:, 0:1]                             # density
        m = Y[:, 1:2]                               # momentum rho*u
        E = Y[:, 2:3]                               # total energy
        u = m / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)  # ideal-gas EOS
        F1 = m                                      # mass flux
        F2 = m * u + p                              # momentum flux
        F3 = u * (E + p)                            # energy flux
    rho_t = tape.gradient(rho, t)
    m_t = tape.gradient(m, t)
    E_t = tape.gradient(E, t)
    F1_x = tape.gradient(F1, x)
    F2_x = tape.gradient(F2, x)
    F3_x = tape.gradient(F3, x)
    del tape
    continuity = rho_t + F1_x
    momentum = m_t + F2_x
    energy = E_t + F3_x
    loss = tf.reduce_mean(tf.square(continuity)
                          + tf.square(momentum)
                          + tf.square(energy))
    return loss
```
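Note that this residual loss only enforces the PDE at interior collocation points; a working PINN also needs a data term that pins the network to the initial (and boundary) condition, which would otherwise leave the solution underdetermined. A minimal sketch of such a term, assuming the model maps (x, t) pairs to the three conserved variables:

```python
import tensorflow as tf

# Hypothetical additional loss term: penalize the network output at
# t = 0 against the prescribed initial condition Y0 (shape (N, 3))
def IC_loss(model, x0, Y0):
    t0 = tf.zeros_like(x0)                    # evaluate at t = 0
    Y_pred = model(tf.concat([x0, t0], axis=1))
    return tf.reduce_mean(tf.square(Y_pred - Y0))
```

In training one would minimize a weighted sum such as `PINN_loss(...) + lambda_ic * IC_loss(...)`, with the weight `lambda_ic` chosen by experiment.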
Next, we generate collocation points and feed the initial condition and the PINN model into the PINN loss function:
```python
# Generate collocation points; x and t must have the same number of
# rows so they can be concatenated into (x, t) input pairs
x = tf.convert_to_tensor(np.linspace(-1, 1, N)[:, None], dtype=tf.float32)
t = tf.convert_to_tensor(np.random.uniform(0, T, (N, 1)), dtype=tf.float32)
# Generate initial condition on the spatial grid
Y0 = initial_condition(x.numpy())
# Define PINN model
model = PINN()
# Compute PINN loss for the untrained network
loss = PINN_loss(model, x, t)
# Print PINN loss
print('PINN loss:', loss.numpy())
```
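The initial condition can be checked in isolation before any training. A standalone numpy sketch (constants re-declared locally) that verifies the array shape and the jump across x = 0:

```python
import numpy as np

gamma, rho_l, u_l, p_l = 1.4, 1.0, 0.0, 1.0
rho_r, u_r, p_r = 0.125, 0.0, 0.1

def initial_condition(x):
    rho = np.where(x < 0, rho_l, rho_r)
    u = np.where(x < 0, u_l, u_r)
    p = np.where(x < 0, p_l, p_r)
    E = p / (gamma - 1) + 0.5 * rho * u**2   # total energy per unit volume
    return np.array([rho, rho * u, E])

x = np.linspace(-1, 1, 5)[:, None]
Y0 = initial_condition(x)
print(Y0.shape)                     # (3, 5, 1)
print(Y0[0, 0, 0], Y0[0, -1, 0])    # 1.0 on the left, 0.125 on the right
```

With zero initial velocity the energy reduces to p / (gamma - 1), i.e. 2.5 on the left and 0.25 on the right.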
Then we define the optimizer and train the model:
```python
# Define optimizer
optimizer = tf.optimizers.Adam()

# Define training step
@tf.function
def train_step(model, x, t):
    with tf.GradientTape() as tape:
        loss = PINN_loss(model, x, t)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Train model
for i in range(10000):
    loss = train_step(model, x, t)
    if i % 1000 == 0:
        print('Step:', i, 'PINN loss:', loss.numpy())
```
Finally, we use the trained PINN model to generate the predicted solution at t = T, compute a reference solution, and visualize both:
```python
# Generate predicted solution at the final time t = T
x_plot = np.linspace(-1, 1, N).astype(np.float32)
X_pred = np.stack([x_plot, np.full_like(x_plot, T)], axis=1)
Y_pred = model.predict(X_pred)
# Generate reference solution. NOTE: `Sod` is assumed to be a
# user-supplied spatial semi-discretization dY/dt = Sod(Y, gamma) of
# the Euler equations (method of lines); it is not defined above.
sol = solve_ivp(lambda t, Y: Sod(Y, gamma), [0, T], Y0.flatten())
Y_true = sol.y[:, -1].reshape(3, -1)  # (rho, rho*u, E) at t = T
rho_p, m_p, E_p = Y_pred[:, 0], Y_pred[:, 1], Y_pred[:, 2]
rho_e, m_e, E_e = Y_true[0], Y_true[1], Y_true[2]
# Plot results
fig, ax = plt.subplots(ncols=3, figsize=(16, 4))
ax[0].plot(x_plot, rho_p, label='PINN')
ax[0].plot(x_plot, rho_e, label='True')
ax[0].set_xlabel('x')
ax[0].set_ylabel('Density')
ax[0].legend()
ax[1].plot(x_plot, m_p / rho_p, label='PINN')
ax[1].plot(x_plot, m_e / rho_e, label='True')
ax[1].set_xlabel('x')
ax[1].set_ylabel('Velocity')
ax[1].legend()
# Pressure from the ideal-gas EOS: p = (gamma - 1) * (E - 0.5 * rho * u^2)
ax[2].plot(x_plot, (gamma - 1) * (E_p - 0.5 * m_p**2 / rho_p), label='PINN')
ax[2].plot(x_plot, (gamma - 1) * (E_e - 0.5 * m_e**2 / rho_e), label='True')
ax[2].set_xlabel('x')
ax[2].set_ylabel('Pressure')
ax[2].legend()
plt.show()
```
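For the standard Sod states used here, the usual reference is the exact Riemann solution rather than ODE time integration: the intermediate ("star-region") pressure and velocity follow from a root-find on Toro's pressure function. A minimal standalone numpy sketch (bisection instead of Newton, constants re-declared locally):

```python
import numpy as np

gamma = 1.4
rho_l, u_l, p_l = 1.0, 0.0, 1.0
rho_r, u_r, p_r = 0.125, 0.0, 0.1
c_l = np.sqrt(gamma * p_l / rho_l)
c_r = np.sqrt(gamma * p_r / rho_r)

def f(p, rho_k, p_k, c_k):
    # Toro's pressure function for one side of the Riemann problem
    if p > p_k:   # shock branch
        A = 2.0 / ((gamma + 1) * rho_k)
        B = (gamma - 1) / (gamma + 1) * p_k
        return (p - p_k) * np.sqrt(A / (p + B))
    else:         # rarefaction branch
        return 2 * c_k / (gamma - 1) * ((p / p_k)**((gamma - 1) / (2 * gamma)) - 1)

# f_l + f_r + (u_r - u_l) is monotone increasing in p, so bisect for its root
lo, hi = 1e-6, 1.0
for _ in range(100):
    p_star = 0.5 * (lo + hi)
    if f(p_star, rho_l, p_l, c_l) + f(p_star, rho_r, p_r, c_r) + (u_r - u_l) > 0:
        hi = p_star
    else:
        lo = p_star
u_star = 0.5 * (u_l + u_r) + 0.5 * (f(p_star, rho_r, p_r, c_r) - f(p_star, rho_l, p_l, c_l))
print(round(p_star, 5), round(u_star, 5))   # ≈ 0.30313 0.92745
```

From (p*, u*) the full profile at any t follows by sampling the left rarefaction, contact, and right shock; that sampling step is omitted here for brevity.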
The complete code is as follows: