STM32F103 and TM1637 Seven-Segment Display Control in Practice

Resource summary: "ALIENTEK MINISTM32 _TEST1_ TM1637.rar_STM32F103_TM1637_ministm32"

Overview: This package contains the source code and supporting files for a TM1637 four-digit seven-segment display demo running on ALIENTEK's 战舰 mini (Warship Mini) learning board. The board is a compact development platform built around the STM32F103 series microcontroller and is widely used for embedded-systems learning and project work. The TM1637 is a common driver IC for 4-digit LED displays; it is controlled over a two-wire serial interface (a clock line and a data line) whose signalling resembles I2C. For developers who want to see how an STM32F103 talks to a TM1637, this package is a useful reference.

Detailed notes:

1. STM32F103 microcontroller: The STM32F103, produced by STMicroelectronics, is a capable Cortex-M3 based microcontroller. It offers a rich set of peripheral interfaces, including I2C, SPI, and USART, and is widely used in industrial control, automotive electronics, and medical devices.

2. TM1637 display driver: The TM1637, made by Titan Micro Electronics (not Toshiba, as sometimes stated), drives 4-digit seven-segment LED displays. It integrates the segment and grid (digit) drive circuitry, so a microcontroller can control the whole display over just two lines: a clock line (CLK) and a data line (DIO).

3. Two-wire communication: The TM1637's interface borrows its signalling style from I2C, the serial bus developed by Philips (now NXP) that uses one data line (SDA) and one clock line (SCL), supports multi-master and multi-slave operation, and is widely used between microcontrollers and peripherals because of its simple wiring. Note, however, that the TM1637 is not a true I2C device: it has no slave address and transmits data LSB first, so it is usually driven by bit-banging two GPIO pins rather than by a hardware I2C peripheral.

4. ALIENTEK Warship Mini learning board: 正点原子 (ALIENTEK) is a company focused on embedded-systems education and development. Its Warship Mini board uses the STM32F103 as the main controller and carries a rich set of on-board interfaces and function modules, making it a practical platform for learning STM32 and building projects.

5. Programming and debugging: The code in the package shows how the STM32F103 communicates with the TM1637 to refresh the display dynamically. By studying it, developers can learn how to toggle the CLK and DIO lines from GPIO and which command bytes to send so that the chip lights the desired digits.

6. File contents: The file list shows an entry named "ALIENTEK MINISTM32 _TEST1_ TM1638" (the "TM1638" in the name is most likely a typo for TM1637), which presumably is the test project or example code included for demonstration and teaching purposes.

Summary: For embedded developers, knowing how to pair the STM32F103 with the TM1637 display driver is a practical skill. The ALIENTEK Warship Mini board provides a good learning platform, and the code in this package is a handy reference. Working through it deepens one's understanding of driving simple two-wire peripherals from the STM32F103 and shows how to use the TM1637 for display output in real projects.
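The package's own source files are not reproduced in this summary, but the following minimal sketch illustrates the bit-banged two-wire sequence the TM1637 expects. It is written against the STM32F10x Standard Peripheral Library; the pin assignment (PB8 = CLK, PB9 = DIO), the delay routine, and all function names are illustrative assumptions rather than the names used in the ALIENTEK example.

```c
/* Minimal bit-banged TM1637 sketch for the STM32F103 (Standard Peripheral Library).
 * Pin choice (PB8 = CLK, PB9 = DIO) and the delay helper are assumptions for
 * illustration only; the actual example in the package may differ. */
#include "stm32f10x.h"

#define TM1637_CLK_PIN  GPIO_Pin_8   /* assumed CLK pin */
#define TM1637_DIO_PIN  GPIO_Pin_9   /* assumed DIO pin */
#define TM1637_PORT     GPIOB

#define CLK_H()  GPIO_SetBits(TM1637_PORT, TM1637_CLK_PIN)
#define CLK_L()  GPIO_ResetBits(TM1637_PORT, TM1637_CLK_PIN)
#define DIO_H()  GPIO_SetBits(TM1637_PORT, TM1637_DIO_PIN)
#define DIO_L()  GPIO_ResetBits(TM1637_PORT, TM1637_DIO_PIN)

/* Crude busy-wait of a few microseconds; adequate for TM1637 timing. */
static void tm1637_delay(void)
{
    volatile uint32_t n = 72;
    while (n--) { __NOP(); }
}

/* Segment patterns for digits 0-9 (segments a..g, decimal point off). */
static const uint8_t seg_table[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

static void tm1637_start(void)   /* DIO falls while CLK is high */
{
    CLK_H(); DIO_H(); tm1637_delay();
    DIO_L();          tm1637_delay();
    CLK_L();
}

static void tm1637_stop(void)    /* DIO rises while CLK is high */
{
    CLK_L(); DIO_L(); tm1637_delay();
    CLK_H();          tm1637_delay();
    DIO_H();
}

static void tm1637_write_byte(uint8_t b)
{
    uint8_t i;
    for (i = 0; i < 8; i++) {            /* LSB first, unlike true I2C */
        CLK_L();
        if (b & 0x01) DIO_H(); else DIO_L();
        tm1637_delay();
        CLK_H(); tm1637_delay();
        b >>= 1;
    }
    /* 9th clock: ACK slot driven by the TM1637 (ignored in this sketch). */
    CLK_L(); DIO_H(); tm1637_delay();
    CLK_H(); tm1637_delay();
    CLK_L();
}

/* Show four digits (0-9 each) at brightness 0-7. */
void tm1637_display(const uint8_t digit[4], uint8_t brightness)
{
    uint8_t i;

    tm1637_start();
    tm1637_write_byte(0x40);             /* data command: write, auto-increment address */
    tm1637_stop();

    tm1637_start();
    tm1637_write_byte(0xC0);             /* address command: start at first digit */
    for (i = 0; i < 4; i++)
        tm1637_write_byte(seg_table[digit[i]]);
    tm1637_stop();

    tm1637_start();
    tm1637_write_byte(0x88 | (brightness & 0x07));  /* display control: on + brightness */
    tm1637_stop();
}

void tm1637_gpio_init(void)
{
    GPIO_InitTypeDef io;
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_GPIOB, ENABLE);
    io.GPIO_Pin   = TM1637_CLK_PIN | TM1637_DIO_PIN;
    io.GPIO_Mode  = GPIO_Mode_Out_OD;    /* open-drain; assumes pull-ups on the TM1637 module */
    io.GPIO_Speed = GPIO_Speed_2MHz;
    GPIO_Init(TM1637_PORT, &io);
}
```

With the GPIOs initialized, a call such as tm1637_display with the digits {1, 2, 3, 4} and brightness 4 would show "1234" at medium brightness; the demo in the package may organize its driver and command sequence differently.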
