kernel_regularizer=regularizers.l2(penalty)

This line applies an L2 regularization penalty to the kernel (weight) matrix of a neural-network layer. Regularization is a technique for preventing overfitting, which occurs when a model becomes complex enough to fit the training data too closely and then performs poorly on new, unseen data. L2 regularization adds a term to the loss being optimized that grows with the squared magnitude of the weights, pushing the model toward smaller weights and reducing overfitting. The `penalty` argument sets the strength of the regularization: larger values mean stronger regularization. The `kernel_regularizer` argument is typically passed when defining the layers of a neural network in a framework such as Keras or TensorFlow.
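For context, here is a minimal sketch of where this argument typically appears when defining a Keras layer (the layer sizes and the value of `penalty` are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

penalty = 0.01  # illustrative regularization strength; larger values shrink weights harder

model = keras.Sequential([
    # L2 penalty on this layer's kernel (weight) matrix
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(penalty)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The penalty term is added to the training loss automatically, so nothing else is needed beyond passing the regularizer to the layer.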

Related recommendations

Optimize this code:

```python
for j in n_components:
    estimator = PCA(n_components=j, random_state=42)
    pca_X_train = estimator.fit_transform(X_standard)
    pca_X_test = estimator.transform(X_standard_test)

    cvx = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    cost = [-5, -3, -1, 1, 3, 5, 7, 9, 11, 13, 15]
    gam = [3, 1, -1, -3, -5, -7, -9, -11, -13, -15]
    parameters = [{'kernel': ['rbf'], 'C': [2**x for x in cost], 'gamma': [2**x for x in gam]}]
    svc_grid_search = GridSearchCV(estimator=SVC(random_state=42), param_grid=parameters,
                                   cv=cvx, scoring=scoring, verbose=0)
    svc_grid_search.fit(pca_X_train, train_y)

    param_grid = {'penalty': ['l1', 'l2'],
                  "C": [0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000],
                  "solver": ["newton-cg", "lbfgs", "liblinear", "sag", "saga"]
                  # "algorithm": ['auto', 'ball_tree', 'kd_tree', 'brute']
                  }
    LR_grid = LogisticRegression(max_iter=1000, random_state=42)
    LR_grid_search = GridSearchCV(LR_grid, param_grid=param_grid, cv=cvx,
                                  scoring=scoring, n_jobs=10, verbose=0)
    LR_grid_search.fit(pca_X_train, train_y)

    estimators = [
        ('lr', LR_grid_search.best_estimator_),
        ('svc', svc_grid_search.best_estimator_),
    ]
    clf = StackingClassifier(estimators=estimators,
                             final_estimator=LinearSVC(C=5, random_state=42),
                             n_jobs=10, verbose=0)
    clf.fit(pca_X_train, train_y)

    estimators = [
        ('lr', LR_grid_search.best_estimator_),
        ('svc', svc_grid_search.best_estimator_),
    ]
    param_grid = {'final_estimator': [LogisticRegression(C=0.00001), LogisticRegression(C=0.0001),
                                      LogisticRegression(C=0.001), LogisticRegression(C=0.01),
                                      LogisticRegression(C=0.1), LogisticRegression(C=1),
                                      LogisticRegression(C=10), LogisticRegression(C=100),
                                      LogisticRegression(C=1000)]}
    Stacking_grid = StackingClassifier(estimators=estimators,)
    Stacking_grid_search = GridSearchCV(Stacking_grid, param_grid=param_grid, cv=cvx,
                                        scoring=scoring, n_jobs=10, verbose=0)
    Stacking_grid_search.fit(pca_X_train, train_y)
    var = Stacking_grid_search.best_estimator_

    train_pre_y = cross_val_predict(Stacking_grid_search.best_estimator_, pca_X_train, train_y, cv=cvx)
    train_res1 = get_measures_gridloo(train_y, train_pre_y)
    test_pre_y = Stacking_grid_search.predict(pca_X_test)
    test_res1 = get_measures_gridloo(test_y, test_pre_y)

    best_pca_train_aucs.append(train_res1.loc[:, "AUC"])
    best_pca_test_aucs.append(test_res1.loc[:, "AUC"])
    best_pca_train_scores.append(train_res1)
    best_pca_test_scores.append(test_res1)

train_aucs.append(np.max(best_pca_train_aucs))
test_aucs.append(best_pca_test_aucs[np.argmax(best_pca_train_aucs)].item())
train_scores.append(best_pca_train_scores[np.argmax(best_pca_train_aucs)])
test_scores.append(best_pca_test_scores[np.argmax(best_pca_train_aucs)])
pca_comp.append(n_components[np.argmax(best_pca_train_aucs)])
print("n_components:")
print(n_components[np.argmax(best_pca_train_aucs)])
```

A few ways to optimize this code:

1. Parallelization: set `n_jobs=-1` in the grid searches so that all available CPU cores are used, which speeds up the search.
2. Define the parameter dictionaries up front: move them outside the loop so they are not rebuilt on every iteration.
3. Reduce repeated computation: keep the best model around and reuse it for cross-validation and prediction instead of recomputing it.
4. Use more efficient algorithms: consider replacing some of the models with more efficient algorithms or models to improve performance.

Here is an optimized version of the code:

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.decomposition import PCA
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
import numpy as np

# Define the parameter grids once, outside the loop
param_grid_svc = {'kernel': ['rbf'],
                  'C': [2 ** x for x in cost],      # cost/gam hold exponents of 2; C and gamma must be positive
                  'gamma': [2 ** x for x in gam]}
param_grid_lr = {'penalty': ['l1', 'l2'],
                 'C': [0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000],
                 'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']}
param_grid_stacking = {'final_estimator': [LogisticRegression(C=10 ** i) for i in range(-5, 4)]}

best_pca_train_aucs = []
best_pca_test_aucs = []
best_pca_train_scores = []
best_pca_test_scores = []
train_aucs = []
test_aucs = []
train_scores = []
test_scores = []
pca_comp = []

for j in n_components:
    # PCA
    estimator = PCA(n_components=j, random_state=42)
    pca_X_train = estimator.fit_transform(X_standard)
    pca_X_test = estimator.transform(X_standard_test)

    # SVC grid search
    cvx = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    svc_grid_search = GridSearchCV(estimator=SVC(random_state=42), param_grid=param_grid_svc,
                                   cv=cvx, scoring=scoring, n_jobs=-1, verbose=0)
    svc_grid_search.fit(pca_X_train, train_y)

    # Logistic regression grid search
    LR_grid = LogisticRegression(max_iter=1000, random_state=42)
    LR_grid_search = GridSearchCV(LR_grid, param_grid=param_grid_lr, cv=cvx,
                                  scoring=scoring, n_jobs=-1, verbose=0)
    LR_grid_search.fit(pca_X_train, train_y)

    # Stacking classifier with a fixed final estimator
    estimators = [
        ('lr', LR_grid_search.best_estimator_),
        ('svc', svc_grid_search.best_estimator_),
    ]
    clf = StackingClassifier(estimators=estimators,
                             final_estimator=LinearSVC(C=5, random_state=42),
                             n_jobs=-1, verbose=0)
    clf.fit(pca_X_train, train_y)

    # Grid search over the stacking final estimator (reuses the same base estimators)
    Stacking_grid = StackingClassifier(estimators=estimators)
    Stacking_grid_search = GridSearchCV(Stacking_grid, param_grid=param_grid_stacking,
                                        cv=cvx, scoring=scoring, n_jobs=-1, verbose=0)
    Stacking_grid_search.fit(pca_X_train, train_y)
    var = Stacking_grid_search.best_estimator_

    # Compute AUCs and scores
    train_pre_y = cross_val_predict(Stacking_grid_search.best_estimator_, pca_X_train, train_y, cv=cvx)
    train_res1 = get_measures_gridloo(train_y, train_pre_y)
    test_pre_y = Stacking_grid_search.predict(pca_X_test)
    test_res1 = get_measures_gridloo(test_y, test_pre_y)

    # Save the per-component results
    best_pca_train_aucs.append(train_res1.loc[:, "AUC"])
    best_pca_test_aucs.append(test_res1.loc[:, "AUC"])
    best_pca_train_scores.append(train_res1)
    best_pca_test_scores.append(test_res1)

# Keep the results for the n_components value with the best training AUC
train_aucs.append(np.max(best_pca_train_aucs))
test_aucs.append(best_pca_test_aucs[np.argmax(best_pca_train_aucs)].item())
train_scores.append(best_pca_train_scores[np.argmax(best_pca_train_aucs)])
test_scores.append(best_pca_test_scores[np.argmax(best_pca_train_aucs)])
pca_comp.append(n_components[np.argmax(best_pca_train_aucs)])
print("n_components:")
print(n_components[np.argmax(best_pca_train_aucs)])
```

Note that this is only one way to optimize the code; the best improvements depend on the specific problem.

```python
def calc_gradient_penalty(self, netD, real_data, fake_data):
    alpha = torch.rand(1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda()
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)
    disc_interpolates, s = netD.forward(interpolates)
    s = torch.autograd.Variable(torch.tensor(0.0), requires_grad=True).cuda()
    gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                               grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                               grad_outputs=torch.ones(s.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    if gradients2 is None:
        return None
    gradient_penalty = (((gradients1.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA) + \
                       (((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)
    return gradient_penalty

def get_loss(self, net, fakeB, realB):
    self.D_fake, x = net.forward(fakeB.detach())
    self.D_fake = self.D_fake.mean()
    self.D_fake = (self.D_fake + x).mean()
    # Real
    self.D_real, x = net.forward(realB)
    self.D_real = (self.D_real + x).mean()
    # Combined loss
    self.loss_D = self.D_fake - self.D_real
    gradient_penalty = self.calc_gradient_penalty(net, realB.data, fakeB.data)
    return self.loss_D + gradient_penalty
```

The line `return self.loss_D + gradient_penalty` raises the error: `TypeError: unsupported operand type(s) for +: 'Tensor' and 'NoneType'`

The error occurs because the `calc_gradient_penalty` function returns `None` when `gradients2` is `None`. In `get_loss`, the returned value then becomes `self.loss_D + None`, which raises the `TypeError`. (`gradients2` ends up as `None` because `s` is immediately overwritten with a fresh constant tensor that does not depend on `interpolates`, so `autograd.grad(..., allow_unused=True)` has nothing to differentiate and returns `None`.) To fix this, add a check in `calc_gradient_penalty` so that it returns `0.0` instead of `None` when `gradients2` is `None`. That way the gradient penalty can always be added to the loss without causing an error. The modified code is as follows:

```python
def calc_gradient_penalty(self, netD, real_data, fake_data):
    alpha = torch.rand(1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda()
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)
    disc_interpolates, s = netD.forward(interpolates)
    s = torch.autograd.Variable(torch.tensor(0.0), requires_grad=True).cuda()
    gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                               grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                               grad_outputs=torch.ones(s.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    if gradients2 is None:
        # Return a neutral penalty instead of None so the caller can always add it
        return 0.0
    gradient_penalty = (((gradients1.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA) + \
                       (((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)
    return gradient_penalty

def get_loss(self, net, fakeB, realB):
    self.D_fake, x = net.forward(fakeB.detach())
    self.D_fake = self.D_fake.mean()
    self.D_fake = (self.D_fake + x).mean()
    # Real
    self.D_real, x = net.forward(realB)
    self.D_real = (self.D_real + x).mean()
    # Combined loss
    self.loss_D = self.D_fake - self.D_real
    gradient_penalty = self.calc_gradient_penalty(net, realB.data, fakeB.data)
    if gradient_penalty is None:
        gradient_penalty = 0.0
    return self.loss_D + gradient_penalty
```
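Because the second `autograd.grad` call can never produce a useful gradient here (see above), a simpler alternative is to drop that term entirely. The following is only a sketch under the assumption that the penalty should be computed from the first discriminator output alone; `lambda_gp` is an illustrative name:

```python
import torch
from torch import autograd

def gradient_penalty(netD, real_data, fake_data, lambda_gp=10.0):
    """WGAN-GP style penalty computed only on the first discriminator output."""
    alpha = torch.rand(1, 1, device=real_data.device).expand(real_data.size())
    interpolates = (alpha * real_data + (1 - alpha) * fake_data).requires_grad_(True)

    disc_interpolates, _ = netD(interpolates)  # the second head is ignored here

    # Gradient of the critic output with respect to the interpolated input
    gradients = autograd.grad(outputs=disc_interpolates,
                              inputs=interpolates,
                              grad_outputs=torch.ones_like(disc_interpolates),
                              create_graph=True,
                              retain_graph=True,
                              only_inputs=True)[0]
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean() * lambda_gp
```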
To tune the hyperparameters of the code above with grid search (GridSearchCV), you can proceed as follows:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
import numpy as np
import pandas as pd

# Read the data
data = pd.read_csv('final_data1.csv')
Y = data.y
X = data.drop('y', axis=1)
xmin = X.min(axis=0)
xmax = X.max(axis=0)
X_norm = (X - xmin) / (xmax - xmin)

# Split into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_norm, Y, test_size=0.2, random_state=42)

# Candidate hyperparameter values
# Note: among these solvers only 'saga' supports 'l1'; incompatible
# penalty/solver pairs fail during the search and are scored as NaN.
param_grid = {
    'C': [0.1, 1.0, 10.0],                           # candidate regularization strengths
    'penalty': ['l1', 'l2'],                         # candidate regularization types
    'solver': ['newton-cg', 'sag', 'saga', 'lbfgs']  # candidate solvers
}

# Create the logistic regression model
model = LogisticRegression(random_state=0, multi_class='multinomial')

# Use grid search to find the best hyperparameter combination
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
grid_search.fit(X_train, y_train)

# Print the best hyperparameter combination
print("Best parameters: ", grid_search.best_params_)

# Predict with the model using the best hyperparameters
best_model = grid_search.best_estimator_
y_pred = best_model.predict(X_test)
y_pred = np.round(y_pred)
```

In the code above, we first read the data and normalize it, then split it into training and test sets. Next, we define candidate values for the hyperparameters (C, penalty, and solver) and create a logistic regression model. GridSearchCV then runs a grid search with 5-fold cross-validation (cv=5), trying every hyperparameter combination and returning the best one. Finally, we use the model with the best hyperparameters to make predictions. You can adjust the candidate values to your own needs and choose parameter ranges that suit your problem.
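To quantify the tuned model, one could additionally score it on the held-out split. This is a small follow-up sketch that assumes the variables defined above:

```python
from sklearn.metrics import accuracy_score, classification_report

print("Test accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```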
You can choose a classifier to use in the pipeline depending on your specific task and the nature of your data. Some commonly used classifiers for document classification include logistic regression, support vector machines (SVM), and naive Bayes. For example, if you want to use logistic regression as your classifier, you can replace the asterisks with `LogisticRegression(random_state=0, solver='liblinear')`; the `random_state` parameter makes the results reproducible, and the liblinear solver supports both the 'l1' and 'l2' penalties searched below. The complete code would look like this:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV

X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values

tfidf = TfidfVectorizer(strip_accents=None,
                        lowercase=False,
                        preprocessor=None)

param_grid = [{'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]},
              {'vect__ngram_range': [(1, 1)],
               'vect__stop_words': [stop, None],
               'vect__tokenizer': [tokenizer, tokenizer_porter],
               'vect__use_idf': [False],
               'vect__norm': [None],
               'clf__penalty': ['l1', 'l2'],
               'clf__C': [1.0, 10.0, 100.0]},
              ]

# liblinear supports both the 'l1' and 'l2' penalties searched above
lr_tfidf = Pipeline([('vect', tfidf),
                     ('clf', LogisticRegression(random_state=0, solver='liblinear'))])

gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
                           scoring='accuracy',
                           cv=5, verbose=2,
                           n_jobs=-1)
```
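A minimal follow-up (assuming the DataFrame `df`, the `stop` list, and the tokenizers above are already defined) would be to fit the grid search and inspect the best configuration:

```python
gs_lr_tfidf.fit(X_train, y_train)

print('Best parameter set: %s' % gs_lr_tfidf.best_params_)
print('CV accuracy: %.3f' % gs_lr_tfidf.best_score_)

clf = gs_lr_tfidf.best_estimator_
print('Test accuracy: %.3f' % clf.score(X_test, y_test))
```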
I'm sorry, you are right, that was a mistake. The normalize parameter should be a boolean that specifies whether to standardize the data, so it can be set to True or False. Here is the revised program:

```python
import pandas as pd
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
from sklearn.model_selection import train_test_split

# Read the Excel file
data = pd.read_excel('D://数据1.xlsx', sheet_name='000')

# Split the data into inputs and outputs
X = data.iloc[:, 0:4].values
y = data.iloc[:, 0:4].values

# Standardize
scaler = StandardScaler()
X = scaler.fit_transform(X)

# Add polynomial features
poly = PolynomialFeatures(degree=2, include_bias=False)
X = poly.fit_transform(X)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

# Create the ridge regression model. Ridge applies L2 regularization by definition,
# with alpha controlling the penalty strength (there is no `penalty` argument).
# `normalize` only exists in scikit-learn < 1.2; the data is already standardized
# with StandardScaler above.
model = Ridge(alpha=0.88, solver='auto', normalize=False,
              max_iter=1000, tol=0.001, random_state=None,
              fit_intercept=True)

# Fit the model
model.fit(X_train, y_train)

# Predict with the model
y_pred = model.predict(X_test)

# Round the predictions to integers
y_pred = np.round(y_pred)

# Remove duplicate rows
y_pred = np.unique(y_pred, axis=0)

# Print the predictions
print(y_pred)
```

Thank you very much for the correction.
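A quick sanity check of the fit (a small sketch, assuming the variables above) is to look at the R² score on the training and held-out splits:

```python
# Ridge.score returns the coefficient of determination (R²)
print("Train R²:", model.score(X_train, y_train))
print("Test R²:", model.score(X_test, y_test))
```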
OK, your question is how to add 'l1' and 'l2' regularization to the model, observe how the decision boundary changes, check the training and test scores, and get a feel for the effect of the two penalties. The regularization type is set with the penalty parameter of LogisticRegression: 'l1' selects L1 regularization and 'l2' selects L2. We can create two LogisticRegression models, one with L1 regularization and one with L2. The code is as follows:

```python
# Train a LogisticRegression model with L1 regularization
# ('l1' requires the liblinear or saga solver)
clf_l1 = LogisticRegression(penalty='l1', solver='liblinear')
clf_l1.fit(X_poly, y)
score_l1 = clf_l1.score(X_poly, y)

# Train a LogisticRegression model with L2 regularization
clf_l2 = LogisticRegression(penalty='l2')
clf_l2.fit(X_poly, y)
score_l2 = clf_l2.score(X_poly, y)
```

We can also plot the decision boundaries for both penalties together with the training and test scores:

```python
# Decision boundary with L1 regularization
Z_l1 = clf_l1.predict(poly.transform(np.c_[xx.ravel(), yy.ravel()]))
Z_l1 = Z_l1.reshape(xx.shape)
plt.contourf(xx, yy, Z_l1, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
plt.title('L1 regularization (Train Score: {:.2f})'.format(score_l1))
plt.show()

# Decision boundary with L2 regularization
Z_l2 = clf_l2.predict(poly.transform(np.c_[xx.ravel(), yy.ravel()]))
Z_l2 = Z_l2.reshape(xx.shape)
plt.contourf(xx, yy, Z_l2, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
plt.title('L2 regularization (Train Score: {:.2f})'.format(score_l2))
plt.show()
```

Looking at the decision boundaries and the training and test scores, you can see that L1 regularization makes the model sparser (some coefficients are compressed to exactly zero), which reduces model complexity and helps avoid overfitting, while L2 regularization keeps the coefficients small and smooth, preventing very large values and likewise helping to avoid overfitting.
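The sparsity claim can also be checked directly on the fitted coefficients (a small sketch, assuming `clf_l1` and `clf_l2` from the code above):

```python
import numpy as np

# Fraction of coefficients driven exactly to zero by each penalty
print("L1 zero coefficients:", np.mean(clf_l1.coef_ == 0))
print("L2 zero coefficients:", np.mean(clf_l2.coef_ == 0))

# Overall magnitude of the coefficients
print("L1 coefficient norm:", np.linalg.norm(clf_l1.coef_))
print("L2 coefficient norm:", np.linalg.norm(clf_l2.coef_))
```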

```cpp
const ServerParam & SP = ServerParam::i();

const int self_min = wm.interceptTable()->selfReachCycle();
const int mate_min = wm.interceptTable()->teammateReachCycle();
int opp_min = wm.interceptTable()->opponentReachCycle();

const PlayerObject * opp_fastest = wm.interceptTable()->fastestOpponent();

if ( opp_fastest
     && opp_fastest->goalie()
     && wm.gameMode().isPenaltyKickMode()
     && opp_fastest->pos().dist( wm.ball().pos() ) >= 3.0 ) // MAGIC NUMBER
{
    M_tackle_situation = false;
    M_opponent_ball = false;
    dlog.addText( Logger::TEAM,
                  __FILE__":(update) penalty shootouts. not a tackle situation" );
    return;
}

if ( opp_fastest
     && wm.gameMode().isPenaltyKickMode()
     && ! opp_fastest->goalie() )
{
    const AbstractPlayerObject * opponent_goalie = wm.getTheirGoalie();
    if ( opponent_goalie )
    {
        /* //yz del
        std::map< const AbstractPlayerObject*, int >::const_iterator player_map_it
            = wm.interceptTable()->playerMap().find( opponent_goalie );
        if ( player_map_it != wm.interceptTable()->playerMap().end() )
        {
            // considering only opponent goalie in penalty-kick mode
            opp_min = player_map_it->second;
            dlog.addText( Logger::TEAM,
                          __FILE__":(update) replaced min_opp with goalie's reach cycle (%d).",
                          opp_min );
        }
        else
        {
            opp_min = 1000000; // practically canceling the fastest non-goalie opponent player
            dlog.addText( Logger::TEAM,
                          __FILE__":%d: (update) set opp_min as 1000000 so as not to consider the fastest opponent.",
                          __LINE__ );
        }
        */
    }
    else
    {
        opp_min = 1000000; // practically canceling the fastest non-goalie opponent player
        dlog.addText( Logger::TEAM,
                      __FILE__":%d (update) set opp_min as 1000000 so as not to consider the fastest opponent.",
                      __LINE__ );
    }
}
```

```python
class PPO(object):
    def __init__(self):
        self.sess = tf.Session()
        self.tfs = tf.placeholder(tf.float32, [None, S_DIM], 'state')

        # critic
        with tf.variable_scope('critic'):
            l1 = tf.layers.dense(self.tfs, 100, tf.nn.relu)
            self.v = tf.layers.dense(l1, 1)
            self.tfdc_r = tf.placeholder(tf.float32, [None, 1], 'discounted_r')
            self.advantage = self.tfdc_r - self.v
            self.closs = tf.reduce_mean(tf.square(self.advantage))
            self.ctrain_op = tf.train.AdamOptimizer(C_LR).minimize(self.closs)

        # actor
        pi, pi_params = self._build_anet('pi', trainable=True)
        oldpi, oldpi_params = self._build_anet('oldpi', trainable=False)
        with tf.variable_scope('sample_action'):
            self.sample_op = tf.squeeze(pi.sample(1), axis=0)  # choosing action
        with tf.variable_scope('update_oldpi'):
            self.update_oldpi_op = [oldp.assign(p) for p, oldp in zip(pi_params, oldpi_params)]

        self.tfa = tf.placeholder(tf.float32, [None, A_DIM], 'action')
        self.tfadv = tf.placeholder(tf.float32, [None, 1], 'advantage')
        with tf.variable_scope('loss'):
            with tf.variable_scope('surrogate'):
                # ratio = tf.exp(pi.log_prob(self.tfa) - oldpi.log_prob(self.tfa))
                ratio = pi.prob(self.tfa) / (oldpi.prob(self.tfa) + 1e-5)
                surr = ratio * self.tfadv
            if METHOD['name'] == 'kl_pen':
                self.tflam = tf.placeholder(tf.float32, None, 'lambda')
                kl = tf.distributions.kl_divergence(oldpi, pi)
                self.kl_mean = tf.reduce_mean(kl)
                self.aloss = -(tf.reduce_mean(surr - self.tflam * kl))
            else:  # clipping method, find this is better
                self.aloss = -tf.reduce_mean(tf.minimum(
                    surr,
                    tf.clip_by_value(ratio, 1. - METHOD['epsilon'], 1. + METHOD['epsilon']) * self.tfadv))
```
