untar_data: unhashable type: 'dict'

untar_data is giving a TypeError: unhashable type: 'dict'. It ran without errors when I was working a few days ago. Any ideas on how to resolve this?

Code:
from fastai.vision.all import *
path = untar_data(URLs.PETS)

Here is the error:


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-...> in <module>
      1 from fastai.vision.all import *
----> 2 path = untar_data(URLs.PETS)

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/data/external.py in untar_data(url, archive, data, c_key, force_download)
    121 def untar_data(url, archive=None, data=None, c_key='data', force_download=False):#, extract_func=file_extract, timeout=4):
    122     "Download `url` to `fname` if `dest` doesn't exist, and extract to folder `dest`"
--> 123     d = FastDownload(fastai_cfg(), module=fastai.data, archive=archive, data=data, base='~/.fastai')
    124     return d.get(url, force=force_download, extract_key=c_key)

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/data/external.py in fastai_cfg()
     13 def fastai_cfg():
     14     "`Config` object for fastai's `config.ini`"
---> 15     return Config(Path(os.getenv('FASTAI_HOME', '~/.fastai')), 'config.ini', create=dict(
     16         data = 'data', archive = 'archive', storage = 'tmp', model = 'models'))
     17 

TypeError: unhashable type: 'dict'


Hi Samsk

It works for me on Colab.

Regards Conwyn

path = untar_data(URLs.PETS)/'images'
100.00% [811712512/811706944 00:22<00:00]

!ls -al /root/.fastai/data/oxford-iiit-pet/images
Streaming output truncated to the last 5000 lines.
-rwxr-xr-x 1 1000 1000 27987 Jun 18 2012 Egyptian_Mau_91.jpg
-rwxr-xr-x 1 1000 1000 31662 Jun 18 2012 Egyptian_Mau_92.jpg
-rwxr-xr-x 1 1000 1000 11833 Jun 18 2012 Egyptian_Mau_93.jpg
-rwxr-xr-x 1 1000 1000 13282 Jun 18 2012 Egyptian_Mau_94.jpg
-rwxr-xr-x 1 1000 1000 25425 Jun 18 2012 Egyptian_Mau_95.jpg
-rwxr-xr-x 1 1000 1000 22833 Jun 18 2012 Egyptian_Mau_96.jpg


Hi Conwyn,
Thank you for your response. Unfortunately I am still getting the same error, but I am using Gradient, so I wonder if that could be part of the problem.

Sam

Hi Samsk, I also noticed the same error on Gradient; I switched to Colab and it works fine.

I get the same error for notebooks I have run previously too. Have you found a way around it for Gradient?


Same error here.

Any thoughts from @dkobran @tomg ?

I’ve had no luck.

Hi there – I believe this is because fastai has switched to using config.ini instead of the old config.yml. We’re publishing a new container that accounts for this change. However, for existing notebooks you’ll need to modify /root/.fastai/config.ini to look like this:

[PATH]
archive = /storage/archive/
model = /storage/models/
data = /storage/data/

cc @BalogunofAfrica @Samsk @Conwyn

Please let me know if something isn’t right. Cheers!
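
If it's not obvious which config file your notebook is actually picking up, here's a quick way to peek from a cell (just a sketch using the standard library; it assumes FASTAI_HOME isn't overridden, so the default ~/.fastai location applies):

from pathlib import Path

# Default fastai config directory; adjust if FASTAI_HOME is set in your environment
fastai_home = Path.home()/'.fastai'
for name in ('config.yml', 'config.ini'):
    f = fastai_home/name
    print(f, '->', 'exists' if f.exists() else 'missing')
    if f.exists(): print(f.read_text())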


Thank you so much for this. I'm trying to use bash to insert what you put here into a config.ini file in the ~/.fastai/ folder. I'll post what works (or doesn't), likely tomorrow. Cheers, and thank you a thousand times.

So I wrote this line (screenshot below) to create the .ini file, but I got the same error.

I’m guessing that was not what you meant when you said:
‘you’ll need to modify /root/.fastai/config.ini to look like this:’

Also, ~/.fastai/ only had config.yml when I started.

Was that what you meant, @tomg?
(screenshot: Screen Shot 2021-08-20 at 3.21.01 PM)

Hi – that is what I meant. The config.yml is the old config method of fastai. Have you pulled the latest update down from Jeremy? I'm curious to see what error you'd get with that config.ini and the latest fastai. I'll need to dig in deeper if you're still having issues.
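
(If "latest update" means the fastai library itself, one way to upgrade from inside the notebook is roughly the following; just a sketch, and it assumes pip is the right tool for your environment rather than conda:)

import sys, subprocess

# Upgrade fastai (and fastcore, which it depends on) in the current environment
subprocess.run([sys.executable, '-m', 'pip', 'install', '-U', 'fastai', 'fastcore'], check=True)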

Ooh, actually no. Thanks for getting back to me.

Figured out a solution without switching to Colab:

  1. Save your notebook to your computer

  2. Open up Paperspace and shut down your current GPU instance

  3. Wait a few minutes as it shuts down

  4. Click "Create" under the "Notebooks" tab on the first page

  5. Start the fastai GPU instance like you did before, but on the new notebook

  6. Load your old .ipynb from your computer into the new notebook

  7. Run it.

Mine worked perfectly

Hi Tom, adding a config.ini didn't help. I had to make a new notebook. However, in the new .fastai/ dir, the only file there was config.yml and the code worked. So it's something other than the .ini/.yml files that broke.

I’m getting this error when calling path = untar_data(URLs.CAMVID) on my local machine, using one of Zach Mueller’s WWF lessons that runs fine for me on Colab.
I’ve got the latest fastai and wwf installed locally.


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-76478f47c748> in <module>
----> 1 path = untar_data(URLs.CAMVID)

~/envs/fastai/lib/python3.8/site-packages/fastai/data/external.py in untar_data(url, archive, data, c_key, force_download)
    121 def untar_data(url, archive=None, data=None, c_key='data', force_download=False):#, extract_func=file_extract, timeout=4):
    122     "Download `url` to `fname` if `dest` doesn't exist, and extract to folder `dest`"
--> 123     d = FastDownload(fastai_cfg(), module=fastai.data, archive=archive, data=data, base='~/.fastai')
    124     return d.get(url, force=force_download, extract_key=c_key)

~/envs/fastai/lib/python3.8/site-packages/fastai/data/external.py in fastai_cfg()
     13 def fastai_cfg():
     14     "`Config` object for fastai's `config.ini`"
---> 15     return Config(Path(os.getenv('FASTAI_HOME', '~/.fastai')), 'config.ini', create=dict(
     16         data = 'data', archive = 'archive', storage = 'tmp', model = 'models'))
     17 

TypeError: unhashable type: 'dict'

So… you all are saying…what exactly? Nothing is in /root on my laptop except stuff that really needs to be root.

What should ~/.fastai/config.ini look like for non-Colab applications? I tried copy-pasting @tomg’s suggestion but the error persists.

Version info:

  • fastai : 2.5.2
  • fastcore : 1.3.19
  • wwf : 0.0.16
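
(For comparison, the installed versions can be printed straight from a notebook cell; a minimal check, leaving out wwf since I'm not certain it exposes a version attribute:)

import fastai, fastcore
print('fastai  :', fastai.__version__)
print('fastcore:', fastcore.__version__)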

UPDATE: So originally (i.e. earlier today) there was only ~/.fastai/config.yml and no config.ini file. Then I tried copying tomg's code. Then what I did was a cp config.{yml,ini} and edited the syntax of the new .ini file to match what it should look like. Here's what I have now, which still doesn't help, i.e. I still get the same error (even after restarting the kernel):

$ cat config.yml
archive_path: /home/shawley/.fastai/archive
data_path: /home/shawley/.fastai/data
model_path: /home/shawley/.fastai/models
storage_path: /tmp
version: 2

$ cat config.ini
[PATH]
archive = /home/shawley/.fastai/archive
model = /home/shawley/.fastai/models
data = /home/shawley/.fastai/data
storage = /tmp
version = 2

?

SOLUTION: (for me at least) Restart Jupyter. Not just the kernel, all of Jupyter. :man_shrugging: :partying_face:

@drscotthawley, I've been facing the same problem as you, and nothing from this thread helped me out.

What I'm doing is downloading/decompressing the data separately from untar_data, and setting the path like this:

print(URLs.CAMVID)
https://s3.amazonaws.com/fast-ai-imagelocal/camvid.tgz

wget https://s3.amazonaws.com/fast-ai-imagelocal/camvid.tgz -P ~/.fastai/archive/

tar -xzf ~/.fastai/archive/camvid.tgz -C ~/.fastai/data/

path = Path(os.getenv('FASTAI_HOME', '~/.fastai/data/camvid'))

With this path in place, I managed to run the other cells from fast_pages.

I hope it helps!
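
If you'd rather stay inside Python, the same manual workaround can be sketched with only the standard library (same URL as in the post above; the default ~/.fastai layout is assumed, with ~ expanded explicitly):

import os, tarfile, urllib.request
from pathlib import Path

# Mirror of the wget/tar steps above: download the archive once, then extract it
base = Path(os.getenv('FASTAI_HOME', '~/.fastai')).expanduser()
archive = base/'archive'/'camvid.tgz'
data = base/'data'
archive.parent.mkdir(parents=True, exist_ok=True)
data.mkdir(parents=True, exist_ok=True)

if not archive.exists():
    urllib.request.urlretrieve('https://s3.amazonaws.com/fast-ai-imagelocal/camvid.tgz', archive)
with tarfile.open(archive) as tf:
    tf.extractall(data)

path = data/'camvid'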


Nothing in this thread has worked for me :frowning: Someone please share if you resolve this. I tried all the variations of the config.yml and config.ini files, rebooting Jupyter, etc.

I am running locally, not on Colab or Gradient. Is this not supported anymore?

The first line above is incorrect; it should be [DEFAULT], i.e.:

[DEFAULT]
archive = /storage/archive/
model = /storage/models/
data = /storage/data/
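
If you'd rather write the file from a notebook cell than by hand, here's a minimal sketch with the standard-library configparser (same /storage paths as above; swap in local paths if you're not on Gradient):

import configparser
from pathlib import Path

cfg = configparser.ConfigParser()
cfg['DEFAULT'] = {
    'archive': '/storage/archive/',
    'model':   '/storage/models/',
    'data':    '/storage/data/',
}
ini = Path.home()/'.fastai'/'config.ini'
ini.parent.mkdir(parents=True, exist_ok=True)
with open(ini, 'w') as f: cfg.write(f)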

Thank you,
This worked for me :relieved:

For the newbies like me :smiley: this is what I did

Open a terminal (make sure you've pressed Start in Jupyter)

cd ~/.fastai/
cat > config.ini

Then enter

[DEFAULT]
archive = /storage/archive/
model = /storage/models/
data = /storage/data/

And save with Ctrl + D

Restart kernel and it should work


Thanks for this :slight_smile: I’ve been having the same issues on Amazon EC2. I tried the other recommendations in this thread with no luck. This approach of getting the files myself and hardcoding the path in the relevant cell(s) has worked for me.