Getting an error when trying to run search_images_ddg in Google Colab

When I try to use search_images_ddg on Google Colab I'm getting this error:

---------------------------------------------------------------------------
TimeoutError                              Traceback (most recent call last)
/usr/lib/python3.11/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
   1347             try:
-> 1348                 h.request(req.get_method(), req.selector, req.data, headers,
   1349                           encode_chunked=req.has_header('Transfer-encoding'))

17 frames
TimeoutError: [Errno 110] Connection timed out

During handling of the above exception, another exception occurred:

URLError                                  Traceback (most recent call last)
/usr/lib/python3.11/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
   1349                           encode_chunked=req.has_header('Transfer-encoding'))
   1350             except OSError as err: # timeout error
-> 1351                 raise URLError(err)
   1352             r = h.getresponse()
   1353         except:

URLError: <urlopen error [Errno 110] Connection timed out>

It seems the library has changed; I used this instead:

from ddgs import DDGS
from fastcore.foundation import L

def search_images(keywords, max_images=200):
    results = DDGS().images(keywords, max_results=max_images)
    return L(r["image"] for r in results if "image" in r)

link to my notebook:


The error means your code is timing out while trying to reach DuckDuckGo servers—likely due to network issues, restrictions, or temporary service problems. Try restarting your Colab runtime, checking your internet, or switching to a different library like bing-image-downloader. Using a proxy or adding retry logic can also help. Want a quick workaround? I can help rework your code.
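
For example, a minimal retry sketch (the ddgs call, delays, and exception handling below are illustrative, not a tested fix):

import time
from ddgs import DDGS

def search_images_with_retry(keywords, max_images=30, retries=3, delay=5):
    # Retry the DuckDuckGo image search a few times, backing off between attempts
    for attempt in range(retries):
        try:
            results = DDGS().images(keywords, max_results=max_images)
            return [r["image"] for r in results if "image" in r]
        except Exception:
            if attempt == retries - 1:
                raise
            # Wait a bit longer after each failed attempt before retrying
            time.sleep(delay * (attempt + 1))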


I have been fighting this issue myself. It seems like most DuckDuckGo-based solutions fail from Colab, because DuckDuckGo filters traffic and Colab reuses IPs or something along those lines. I've been trying Openverse instead (roughly as in the sketch below), but that isn't easy either, and it runs into its own limits later, such as getting good URLs from Flickr, which then blocks the downloads.
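
Here is roughly what I mean by trying Openverse; note that the endpoint and field names are from memory, so double-check them against the Openverse API docs:

import requests

def search_openverse_images(query, max_images=30):
    # Query the public Openverse API for image results
    resp = requests.get(
        "https://api.openverse.org/v1/images/",
        params={"q": query, "page_size": max_images},
        timeout=10,
    )
    resp.raise_for_status()
    # Each result should carry a direct image URL in its "url" field
    return [r["url"] for r in resp.json().get("results", []) if "url" in r]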

For something more productive I'd consider Bing or similar (which involves registering on Azure). Otherwise, I kind of gave up and decided to use Imageye to download mid-sized images semi-manually: you do the image search (DDG or whatever) and scroll until you see enough results, then Imageye downloads them in a batch. That saves quite a lot of time, but you end up with roughly 500x500 images.

Really interested in real solutions for this basic problem that work in 2025.

Could we see your working code block please? Thank you! :slight_smile:

from ddgs import DDGS
from fastcore.foundation import L

def search_images(keywords, max_images=10, color='color'):
    # Ask DuckDuckGo for image results and pull out just the image URLs
    results = DDGS().images(
        query=keywords,
        max_results=max_images,
        color=color,
    )
    return L(results).itemgot('image')

I solved it using this: you need to pass the color='color' parameter to ddgs.