Error in `[.data.frame`(thedat, , y_name, with = F) : unused argument (with = F)
Posted: 2023-05-24 19:01:22
This error occurs because `with = F` is an argument of `data.table`'s `[` method, not of `[.data.frame`; when `thedat` is a plain data frame, the extra argument is rejected as "unused". To select a single column while keeping the result as a data frame, use `drop = FALSE` instead (or convert `thedat` to a `data.table` first if you actually want `with = FALSE` semantics):
```
thedat[, y_name, drop = FALSE]
```
Related questions
Optimize this code:
```
def cluster_format(self, start_time, end_time, save_on=True, data_clean=False, data_name=None):
    """
    local format function is to format data from beihang.
    :param start_time:
    :param end_time:
    :return:
    """
    # Household cluster-level data cleaning
    if data_clean:
        unused_index_col = [i for i in self.df.columns if 'Unnamed' in i]
        self.df.drop(columns=unused_index_col, inplace=True)
        self.df.drop_duplicates(inplace=True, ignore_index=True)
        self.df.reset_index(drop=True, inplace=True)
        dupli_header_lines = np.where(self.df['sendtime'] == 'sendtime')[0]
        self.df.drop(index=dupli_header_lines, inplace=True)
        # NOTE: errors='ignore' is deprecated in recent pandas versions
        self.df = self.df.apply(pd.to_numeric, errors='ignore')
        self.df['sendtime'] = pd.to_datetime(self.df['sendtime'])
        self.df.sort_values(by='sendtime', inplace=True, ignore_index=True)
        self.df.to_csv(data_name, index=False)
    # Apply the base formatting
    self.df = super().format(start_time, end_time)
    module_number_register = np.unique(self.df['bat_module_num'])
    # if registered m_num is 0 and not changed, there is no module data
    if not np.any(module_number_register):
        logger.logger.warning("No module data!")
        sys.exit()
    if 'bat_module_voltage_00' in self.df.columns:
        volt_ref = 'bat_module_voltage_00'
    elif 'bat_module_voltage_01' in self.df.columns:
        volt_ref = 'bat_module_voltage_01'
    elif 'bat_module_voltage_02' in self.df.columns:
        volt_ref = 'bat_module_voltage_02'
    else:
        logger.logger.warning("No module data!")
        sys.exit()
    self.df.dropna(axis=0, subset=[volt_ref], inplace=True)
    self.df.reset_index(drop=True, inplace=True)
    self.headers = list(self.df.columns)
    # time duration of a cluster
    self.length = len(self.df)
    if self.length == 0:
        logger.logger.warning("After cluster data clean, no effective data!")
        raise ValueError("No effective data after cluster data clean.")
    self.cluster_stats(save_on)
    for m in range(self.mod_num):
        print(self.clusterid, self.mod_num)
        self.module_list.append(np.unique(self.df[f'bat_module_sn_{str(m).zfill(2)}'].dropna())[0])
```
Here are some possible optimizations for the given code:
1. The list comprehension that finds columns with 'Unnamed' in their names can equivalently be written with `filter()` and a lambda; this is a stylistic alternative rather than a performance win:
```
unused_index_col = list(filter(lambda x: 'Unnamed' in x, self.df.columns))
```
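As a quick sanity check, both forms select exactly the same columns; here with a hypothetical list of column names like those pandas produces for unnamed index columns:

```python
# Hypothetical column names, as written when a CSV is saved with its index
columns = ['Unnamed: 0', 'sendtime', 'bat_module_num', 'Unnamed: 5']

comp = [c for c in columns if 'Unnamed' in c]           # list comprehension
filt = list(filter(lambda c: 'Unnamed' in c, columns))  # filter + lambda

print(comp)          # ['Unnamed: 0', 'Unnamed: 5']
print(comp == filt)  # True
```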
2. Since `drop_duplicates()` is already called with `ignore_index=True`, the row index is renumbered in the same step, so the following `reset_index(drop=True)` call is redundant and can be removed:
```
self.df.drop_duplicates(inplace=True, ignore_index=True)
```
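A minimal sketch on a toy DataFrame (assuming pandas is available) showing that `ignore_index=True` already renumbers the rows, so the extra `reset_index` is unnecessary:

```python
import pandas as pd

# Toy frame with one fully duplicated row
df = pd.DataFrame({'sendtime': ['t1', 't1', 't2'], 'v': [1, 1, 2]})
df.drop_duplicates(inplace=True, ignore_index=True)

# The index is already 0..n-1; no reset_index needed
print(list(df.index))  # [0, 1]
```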
3. Instead of using `sys.exit()` to terminate the whole interpreter when there is no module data, raise a `ValueError` with an appropriate message so that callers can decide how to handle the failure:
```
raise ValueError("No module data!")
```
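The practical difference is that a `ValueError` can be caught and handled by the caller, while `sys.exit()` tears down the process. A small sketch with a hypothetical stand-in for the original check:

```python
def load_modules(has_module_data):
    # Hypothetical stand-in for the original "No module data!" check
    if not has_module_data:
        raise ValueError("No module data!")
    return "ok"

try:
    load_modules(False)
except ValueError as e:
    message = str(e)
    print(f"recovered: {message}")  # recovered: No module data!
```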
4. Instead of the chain of `if` statements, index `self.df.columns` with a boolean mask built from `str.startswith` to select the first column whose name starts with 'bat_module_voltage' (note that `[0]` raises `IndexError` if no column matches, so keep a guard for the no-data case):
```
volt_ref_col = self.df.columns[self.df.columns.str.startswith('bat_module_voltage')][0]
```
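A toy example (assuming pandas) showing the selection plus the guard that preserves the original "No module data!" behavior when nothing matches:

```python
import pandas as pd

# Hypothetical column layout for one cluster
df = pd.DataFrame(columns=['sendtime', 'bat_module_voltage_01', 'bat_module_sn_01'])

matches = df.columns[df.columns.str.startswith('bat_module_voltage')]
if len(matches) == 0:
    raise ValueError("No module data!")
volt_ref = matches[0]
print(volt_ref)  # bat_module_voltage_01
```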
5. The loop that collects module serial numbers can be collapsed into a list comprehension, and the `print()` call inside it should be removed or replaced by proper logging:
```
self.module_list = [
    np.unique(self.df[f'bat_module_sn_{str(m).zfill(2)}'].dropna())[0]
    for m in range(self.mod_num)
]
```
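The zero-padded column-name pattern can be checked in isolation; the serial-number lookup is simulated here with a plain dict instead of the real DataFrame:

```python
# Hypothetical serial numbers keyed by zero-padded column names
data = {
    'bat_module_sn_00': 'SN-A',
    'bat_module_sn_01': 'SN-B',
    'bat_module_sn_02': 'SN-C',
}
mod_num = 3

# Same f-string/zfill pattern as the original loop, as a comprehension
module_list = [data[f'bat_module_sn_{str(m).zfill(2)}'] for m in range(mod_num)]
print(module_list)  # ['SN-A', 'SN-B', 'SN-C']
```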
By applying these optimizations, the code can become more concise and efficient.
Explain the code after the colon:
```
if allow_nograd:
    # Compute relevant gradients
    diff_params = [p for p in self.module.parameters() if p.requires_grad]
    grad_params = grad(loss,
                       diff_params,
                       retain_graph=second_order,
                       create_graph=second_order,
                       allow_unused=allow_unused)
    gradients = []
    grad_counter = 0
    # Handles gradients for non-differentiable parameters
    for param in self.module.parameters():
        if param.requires_grad:
            gradient = grad_params[grad_counter]
            grad_counter += 1
        else:
            gradient = None
        gradients.append(gradient)
else:
    try:
        gradients = grad(loss,
                         self.module.parameters(),
                         retain_graph=second_order,
                         create_graph=second_order,
                         allow_unused=allow_unused)
    except RuntimeError:
        traceback.print_exc()
        print('learn2learn: Maybe try with allow_nograd=True and/or allow_unused=True ?')
```
This code is the gradient-computation step of a PyTorch model. During training, backpropagation computes the gradients of the loss with respect to the model parameters so that they can be updated. The if-else here handles the case where the model contains non-differentiable (frozen) parameters.
If `allow_nograd` is True, gradients are computed only for the differentiable parameters: `diff_params` collects the parameters with `requires_grad=True`, and `grad_params` holds their computed gradients. The loop then re-aligns these gradients with the full parameter list, inserting `None` for every parameter that does not require a gradient.
If `allow_nograd` is False, the code tries to compute gradients for all parameters at once. If the model contains non-differentiable parameters, `grad()` raises a `RuntimeError`; the code then prints the traceback and suggests retrying with `allow_nograd=True` and/or `allow_unused=True`.
In short, this block produces the per-parameter gradients that are later used for the parameter update.
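A minimal, self-contained sketch of the same pattern (assuming PyTorch is installed): one tensor is frozen with `requires_grad=False`, gradients are computed only for the differentiable one, and `None` is slotted in for the frozen one:

```python
import torch
from torch.autograd import grad

# Two "parameters": w is trainable, b is frozen
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=False)
params = [w, b]

loss = w ** 2  # d(loss)/dw = 2 * w = 4.0

# Compute gradients only for differentiable parameters
diff_params = [p for p in params if p.requires_grad]
grad_params = grad(loss, diff_params, allow_unused=True)

# Re-align gradients with the full parameter list, None for frozen params
gradients, counter = [], 0
for p in params:
    if p.requires_grad:
        gradients.append(grad_params[counter])
        counter += 1
    else:
        gradients.append(None)

print(gradients[0].item())  # 4.0
print(gradients[1])         # None
```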