Incorporating Databricks notebooks into nbdev

I’m working in Databricks as part of my company’s policy, but I’d like to use nbdev as a platform. Is there some way to convert Databricks notebook cells that look like this:

# Databricks notebook source
# MAGIC %run ./machine_learning_utils

# COMMAND ----------



# COMMAND ----------

column_names = ['pmid', 'journal', 'title', 'abstract', 'keywords', 
               'label','pub_type','authors','date1','doi','date2',
               'label_category']
text_columns = ['title', 'abstract']
categories = ['Case Report', 'Diagnosis', 'Epidemic Forecasting', 
              'General Info', 'Mechanism', 'Prevention', 
              'Transmission', 'Treatment']
ds = load_dataset('csv', skiprows=1,
                  column_names=column_names,
                  data_files={'train': 'train.csv',
                              'valid': 'valid.csv',
                              'test': 'test.csv'})

# COMMAND ----------

i.e., can cells separated by # COMMAND ---------- lines be converted into standard Jupyter notebook cells that nbdev can process easily?
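For what it’s worth, here is a rough sketch of what I have in mind: the Databricks "source" export is just a Python file with a header comment, `# COMMAND ----------` separators, and `# MAGIC ` prefixes on magic lines, so a small script could split it into cells and emit notebook JSON. This is my own hypothetical converter (the function name `databricks_to_ipynb` is made up, and it writes the .ipynb JSON by hand with the stdlib; the `nbformat` package would be the sturdier choice), not an official nbdev or Databricks tool:

```python
import json
import re

def databricks_to_ipynb(src_path, out_path):
    """Split a Databricks 'source' export into Jupyter code cells."""
    with open(src_path) as f:
        text = f.read()
    # Drop the "# Databricks notebook source" header line if present.
    text = re.sub(r"^# Databricks notebook source\n", "", text)
    cells = []
    for chunk in text.split("# COMMAND ----------"):
        chunk = chunk.strip("\n")
        if not chunk:
            continue  # Databricks exports often contain empty cells
        # "# MAGIC " prefixes wrap magic/markdown lines; unwrap them.
        chunk = re.sub(r"^# MAGIC ?", "", chunk, flags=re.MULTILINE)
        cells.append({
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": chunk,
        })
    # Minimal nbformat-4 notebook structure, written as plain JSON.
    nb = {"cells": cells, "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
    with open(out_path, "w") as f:
        json.dump(nb, f, indent=1)
```

This only handles code cells; `# MAGIC %md` cells would presumably need to become markdown cells instead for nbdev's directives to work, which is the part I’m unsure about.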


Looking for nbdev support with Databricks notebooks too!