Need advice for improving models with small training dataset

My training dataset consists of fewer than 900 rows in a CSV, and I'm using anywhere between 50 and 150 features (some engineered) in modeling.

What can/should be done to improve model accuracy with such a small dataset? Any ideas would be appreciated, especially from anyone experienced with working on small datasets.

I've been thinking about this recently. I haven't exercised my google-fu yet, so apologies if this is well covered, but have you tried duplicating your dataset 10-15x and running it through? Similar to how we oversample to deal with imbalanced datasets.
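Roughly what I mean, as a minimal sketch (assuming a pandas DataFrame loaded from your CSV; the file name is made up):

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical file name; adjust to your data.
df = pd.read_csv("train.csv")

# Naive duplication: repeat every row 10x.
df_dup = pd.concat([df] * 10, ignore_index=True)

# Closer to the imbalanced-data trick: bootstrap sampling with replacement,
# which draws ~10x rows with random repetition rather than exact tiling.
df_boot = resample(df, replace=True, n_samples=len(df) * 10, random_state=0)
```

One thing I'd watch either way: duplicate only inside the training split, since copies that land on both sides of a train/validation split will inflate your scores.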

Also - this weekend I'm going to try using embeddings learned from a large dataset on fields it shares with a smaller one. If anyone has tried this - what was your experience?
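In case it helps to see what I'm picturing, here's a rough sketch only (the `city` field, file names, label column, and dimensions are all made-up assumptions; any categorical field shared by both datasets would do):

```python
import pandas as pd
import tensorflow as tf

# Hypothetical files sharing a categorical 'city' field; big.csv also has a 'label'.
big = pd.read_csv("big.csv")
small = pd.read_csv("small.csv")

cats = big["city"].astype("category")
big["city_idx"] = cats.cat.codes
n_cats, emb_dim = len(cats.cat.categories), 8

# Tiny supervised model on the big dataset, only to learn the embedding weights.
inp = tf.keras.Input(shape=(1,))
emb = tf.keras.layers.Embedding(n_cats, emb_dim, name="city_emb")(inp)
out = tf.keras.layers.Dense(1, activation="sigmoid")(tf.keras.layers.Flatten()(emb))
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(big["city_idx"], big["label"], epochs=5, verbose=0)

# Pull out the learned vectors and join them onto the small dataset as features.
vectors = model.get_layer("city_emb").get_weights()[0]  # shape (n_cats, emb_dim)
lookup = {c: vectors[i] for i, c in enumerate(cats.cat.categories)}
emb_cols = small["city"].map(lookup).apply(pd.Series)
emb_cols.columns = [f"city_emb_{i}" for i in range(emb_dim)]
small = pd.concat([small, emb_cols], axis=1)
```

Categories that never appear in the big dataset come out as NaN here, so they'd need a fallback (e.g. the mean embedding vector).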