Fine-tuning a text-to-image model using LoRA

export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"
export OUTPUT_DIR="./models/lora/pokemon"

sudo accelerate launch --mixed_precision="bf16" train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --dataloader_num_workers=8 \
  --resolution=256 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --checkpointing_steps=500

This is the command I used to run the fine-tuning, following the approach described in Low-Rank Adaptation of Large Language Models (LoRA).
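For context, here is a minimal sketch of the low-rank update idea from the LoRA paper as I understand it (just an illustration of the technique, not the diffusers training script's actual implementation):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        # Only these two small matrices are trained: r * (in + out) parameters.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # frozen path + scaled low-rank residual: y = xW^T + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)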

The error I get is:

  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'main.<locals>.preprocess_train'
wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
wandb: 🚀 View run twilight-sound-18 at: Weights & Biases
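As far as I can tell, the failure happens because preprocess_train is defined inside main() while the DataLoader uses worker processes (--dataloader_num_workers=8); on macOS, workers are started with "spawn", so the dataset transform has to be pickled, and a nested function cannot be. A minimal sketch reproducing the same error (this is my assumption about the cause, not output from the script):

import pickle

def main():
    def preprocess_train(examples):
        # stands in for the preprocess_train defined inside main() in the script
        return examples

    # Spawned DataLoader workers must pickle the dataset transform; a function
    # defined inside main() cannot be pickled, matching the traceback above.
    pickle.dumps(preprocess_train)  # AttributeError: Can't pickle local object ...

if __name__ == "__main__":
    main()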
Configuration file:
/.cache/huggingface/accelerate/default_config.yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_CPU
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: true
The system is a MacBook Pro M1. Any input is appreciated.
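In case it's relevant: the config above pins everything to the CPU (use_cpu: true), and I haven't checked whether the MPS backend is visible at all on this machine. A quick diagnostic I'd run (assuming a PyTorch build recent enough to include MPS, i.e. torch >= 1.12):

import torch

# is_built(): whether this PyTorch build includes MPS support
# is_available(): whether the MPS device can actually be used on this machine
print(torch.backends.mps.is_built(), torch.backends.mps.is_available())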