Why not? - You start with zero bias and do a run. Then you tweak the bias slightly and see if the result improves. If it does, tweak it again. Stop when it no longer improves the run. It goes without saying that a precondition for this exercise is a functioning model without bugs.
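The procedure above is essentially a one-dimensional hill climb on the bias. A minimal sketch, with a made-up scoring function standing in for "do a run" (the quadratic and its optimum are purely illustrative):

```python
# Toy "run": score a model whose only tunable knob here is the bias.
# The scoring function and its optimum are invented for illustration.
def run_model(bias):
    return -(bias - 0.3) ** 2  # higher is better; best at bias = 0.3

bias, step = 0.0, 0.05
best = run_model(bias)          # start with zero bias and do a run
while True:
    candidate = run_model(bias + step)
    if candidate <= best:       # stop when tweaking no longer improves the run
        break
    bias, best = bias + step, candidate

print(f"settled on bias ~ {bias:.2f}")
```

In practice each "run" is a real training run, so you would use far fewer, coarser steps than a loop like this suggests.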
Yes. But there are many things to try to improve a model; I'd normally stack more layers or change the number of inputs and outputs. I don't yet have a sense of when to change the bias. I could try this next time, but having some intuition would help me try things more naturally.
Hi everyone, when I run `lr = find_optimal_lr(learn)` I get the error below and I don't know how to fix it.
P.S. version '1.0.61'
lr = find_optimal_lr(learn)
No artists with labels found to put in legend. Note that artists whose label start with an underscore are ignored when legend() is called with no argument.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [62], in <cell line: 1>()
----> 1 lr = find_optimal_lr(learn)
Input In [61], in find_optimal_lr(learner, noise, show_df, show_min_values)
110 rev_tru_idx[rev_tru_idx.idxmax()] = 0
111 # pdb.set_trace()
--> 112 optimun_lr_lower_bound_1_g = df.lrs.iloc[rev_tru_idx.idxmax()]
113 rev_tru_idx[rev_tru_idx.idxmax()] = 0
114 optimun_lr_lower_bound_2_g = df.lrs.iloc[rev_tru_idx.idxmax()]
File ~/miniconda3/envs/fastai/lib/python3.9/site-packages/pandas/core/series.py:2404, in Series.idxmax(self, axis, skipna, *args, **kwargs)
2339 def idxmax(self, axis=0, skipna=True, *args, **kwargs):
2340 """
2341 Return the row label of the maximum value.
2342
(...)
2402 nan
2403 """
-> 2404 i = self.argmax(axis, skipna, *args, **kwargs)
2405 if i == -1:
2406 return np.nan
File ~/miniconda3/envs/fastai/lib/python3.9/site-packages/pandas/core/base.py:657, in IndexOpsMixin.argmax(self, axis, skipna, *args, **kwargs)
653 return delegate.argmax()
654 else:
655 # error: Incompatible return value type (got "Union[int, ndarray]", expected
656 # "int")
--> 657 return nanops.nanargmax( # type: ignore[return-value]
658 delegate, skipna=skipna
659 )
File ~/miniconda3/envs/fastai/lib/python3.9/site-packages/pandas/core/nanops.py:88, in disallow.__call__.<locals>._f(*args, **kwargs)
86 if any(self.check(obj) for obj in obj_iter):
87 f_name = f.__name__.replace("nan", "")
---> 88 raise TypeError(
89 f"reduction operation '{f_name}' not allowed for this dtype"
90 )
91 try:
92 with np.errstate(invalid="ignore"):
TypeError: reduction operation 'argmax' not allowed for this dtype
I believe you are thinking of AI as a procedural undertaking. AI is more of an art than a science: you try everything, and if something doesn't work, you try something else.
Thanks for sharing your opinion
Modify as follows: change `rev_tru_idx[rev_tru_idx.idxmax()] = np.Nan`
to `rev_tru_idx[rev_tru_idx.idxmax()] = False`
Input In [61], in find_optimal_lr(learner, noise, show_df, show_min_values)
110 rev_tru_idx[rev_tru_idx.idxmax()] = False
111 # pdb.set_trace()
--> 112 optimun_lr_lower_bound_1_g = df.lrs.iloc[rev_tru_idx.idxmax()]
113 rev_tru_idx[rev_tru_idx.idxmax()] = False
114 optimun_lr_lower_bound_2_g = df.lrs.iloc[rev_tru_idx.idxmax()]
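For anyone curious why the fix works: assigning `np.nan` into a boolean Series upcasts it to `object` dtype, and pandas then refuses `argmax`/`idxmax` on that dtype, which is exactly the `TypeError` in the traceback. Assigning `False` keeps the dtype boolean. A minimal reproduction (variable names are my own, not from `find_optimal_lr`):

```python
import numpy as np
import pandas as pd

mask = pd.Series([True, False, True, True])

# Assigning np.nan poisons the boolean Series: pandas upcasts it to
# object dtype, and idxmax() on that dtype raises
# "reduction operation 'argmax' not allowed for this dtype".
broken = mask.copy()
broken[broken.idxmax()] = np.nan
print(broken.dtype)   # object

# Assigning False keeps the dtype boolean, so idxmax() keeps working
# and returns the position of the next True entry.
fixed = mask.copy()
fixed[fixed.idxmax()] = False
print(fixed.dtype)    # bool
print(fixed.idxmax()) # 2
```

Note that on recent pandas versions the incompatible `np.nan` assignment also emits a `FutureWarning`, since silent upcasting is being deprecated.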
The main intuition I can give you about bias is this: bias is most helpful when you have, or believe you have, unbalanced data, which was the case in the model you are referring to. Bias can redress the imbalance and improve the model's results.
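One concrete version of this idea (my addition, not something stated above) is to initialise the output-layer bias so that the model's initial prediction already matches the class frequencies, e.g. `b = log(pos / neg)` for a sigmoid output. A numpy sketch:

```python
import numpy as np

# Assumed example: a binary classifier with 10% positives.
pos, neg = 100, 900
bias = np.log(pos / neg)  # initialise output bias to the log-odds of the data

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# With zero pre-activation, the initial prediction equals the base rate,
# so the network doesn't spend its first epochs just learning the imbalance.
print(sigmoid(0.0 + bias))  # ~0.10
```

This is one common way bias "redresses the imbalance"; whether it helps a given model still has to be checked empirically, as discussed earlier in the thread.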
Great! That’s exactly what I am looking for. Thanks