Getting The Bing Image Search key

I was able to get this to work when I first attempted it in September, using the URLs given when signing up and configuring the Azure Bing Search resource; the search_images_bing() function worked flawlessly.
But when I returned to go over the material the other day, it did not work, no matter what I tried. I even signed up for a new account and got new keys.
After spending a fair amount of time trying to tease apart how that search function integrates with the Azure Cognitive Services Python module, I came away with the feeling that something had changed. You may notice that Microsoft made a big change around Bing Search on 10/30/2020, but I was unable to relate the information in those notices to the error messages I was getting ('resource not found'). I think the endpoint URLs were simply wrong, and the Microsoft client SDK was having trouble with them.
Finally, I was able to get Bing Image Search to work, but I had to code my own version using the technique shown here:

1 Like

It does seem like the old method of retrieving the API keys from https://azure.microsoft.com/en-us/try/cognitive-services/?api=bing-image-search-api is no longer working.

I used the following code as described on the Microsoft quickstart guide: https://docs.microsoft.com/en-us/bing/search-apis/bing-image-search/quickstarts/rest/python

import requests

subscription_key = "XXX"
search_url = "https://api.bing.microsoft.com/v7.0/images/search"
headers = {"Ocp-Apim-Subscription-Key" : subscription_key}

search_term = "grizzly"

params  = {"q": search_term, "license": "public", "imageType": "photo"}

response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()
search_results = response.json()

img_urls = [img['contentUrl'] for img in search_results["value"]]

When checking the length of img_urls, it seems like it is now 35 instead of the earlier 150. Does anyone have an idea how to increase the number of image URLs that can be retrieved from the Bing Search API?

3 Likes

I have got the same problem. This is clearly a problem on Microsoft's side: links are broken on their website, and documentation is missing.

I have looked around for the substitute and my best shot so far is SerpApi, https://serpapi.com/

2 Likes

Oh, by the way, I have figured out the image call limit. Just add a count parameter inside the params:

params  = {"q": search_term, "license": "public", "imageType": "photo", "count":"150"}
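If you need more than 150 images, the v7 API also accepts an offset parameter and returns a nextOffset field in the response, so you can page through results. Here is a minimal sketch under those assumptions (fetch_image_urls is my own helper name, not part of any SDK, and the key is a placeholder):

```python
import requests

subscription_key = "XXX"  # placeholder
search_url = "https://api.bing.microsoft.com/v7.0/images/search"
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

def fetch_image_urls(term, total=300, page_size=150):
    """Collect up to `total` image URLs, `page_size` (max 150) per request."""
    urls, offset = [], 0
    while len(urls) < total:
        params = {"q": term, "license": "public", "imageType": "photo",
                  "count": page_size, "offset": offset}
        resp = requests.get(search_url, headers=headers, params=params)
        resp.raise_for_status()
        results = resp.json()
        batch = [img["contentUrl"] for img in results["value"]]
        if not batch:
            break  # no more results for this query
        urls.extend(batch)
        # `nextOffset` accounts for duplicates Bing already removed
        offset = results.get("nextOffset", offset + page_size)
    return urls[:total]
```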
2 Likes

Yeah it happened very recently, and caused quite a bit of hassle for all of us while sorting out this Azure issue.

Hi everyone,

I found this library that does not require a key to query images from Bing: https://pypi.org/project/bing-image-downloader/
It is very simple to use, and at least it stays free for more than 7 days. :slight_smile:

!pip install bing-image-downloader
from bing_image_downloader import downloader

for q in ["grizzly", "black bear", "teddy bear"]:
    downloader.download(q, limit=20, output_dir='bears', adult_filter_off=True, force_replace=False, timeout=5)
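Once the downloads finish, you will want the saved files as a flat list of paths for the next steps in the notebook. A small sketch, assuming the library's output_dir/<query>/ folder layout (collect_images is my own helper name):

```python
from pathlib import Path

def collect_images(output_dir="bears", exts=(".jpg", ".jpeg", ".png")):
    """Gather image files saved under output_dir/<query>/ into one list."""
    root = Path(output_dir)
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in exts)
```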
Hope it helps !
Charles

13 Likes

I’m having a very similar issue, on the code cell that has:
results = search_images_bing(key, 'grizzly bear')
ims = results.attrgot('content_url')
len(ims)
I am receiving:
ErrorResponseException: Operation returned an invalid status code 'PermissionDenied'

@klty0988 What cell do I replace in the Clean section of the Jupyter notebook with this code?

Hi Jeremy,

Because of the API issue, I have actually abandoned the search_images_bing function to retrieve the image URLs.

Here is what I have done instead (to obtain the ims, which I have renamed as img_urls) to match the updated Bing Search V7 API.

search_term = "grizzly"

# Add a count parameter so that the max number of images (150) is
# retrieved on each API call. Otherwise, the default is 35.
params  = {"q": search_term, "license": "public", "imageType": "photo", "count":"150"}
response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()
search_results = response.json() # Parse the JSON response

# Collating the image URLs into a list
img_urls = [img['contentUrl'] for img in search_results["value"]]

# Get length of the img_urls list
len(img_urls)
        
# Downloading images from the list of image URLs
download_images(dest, urls=img_urls)

The above performs the same function as the code you provided, and it has worked for me. I am currently writing a Medium post on this (with the updated codes), so I will share it with you guys once it is completed.
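One extra step worth adding after download_images: some of the retrieved URLs point to files that do not download cleanly. The book uses fastai's verify_images for this; as a dependency-light sketch of the same idea (remove_broken_images is my own helper name), PIL can do the check directly:

```python
from pathlib import Path
from PIL import Image

def remove_broken_images(folder):
    """Delete files that PIL cannot open (truncated or failed downloads)."""
    removed = []
    for p in Path(folder).rglob("*"):
        if not p.is_file():
            continue
        try:
            with Image.open(p) as im:
                im.verify()  # raises on corrupt or non-image files
        except Exception:
            p.unlink()
            removed.append(p)
    return removed
```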

3 Likes

Hi Kenneth,

Thank you for such a clear response. Regarding the script you provided, I was able to move past the initial error; however, now I am presented with this one:
FileExistsError: [Errno 17] File exists: 'images/grizzly.jpg'

Is this something you have encountered before? If so, how do I get around this error?

I suspect it is because you already have the grizzly.jpg image inside your images folder from your earlier attempts. You might want to consider restarting it all from a clean slate, and you can do so by clicking Factory reset runtime from the Runtime dropdown menu to reset everything and delete any pre-existing files.
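If you would rather clear just the images folder than factory-reset the whole runtime, a tiny helper can wipe and recreate it before re-running the download cells. A sketch (fresh_dir is a hypothetical name of my own):

```python
import shutil
from pathlib import Path

def fresh_dir(path="images"):
    """Delete any previous contents and recreate the folder (clean slate)."""
    p = Path(path)
    if p.exists():
        shutil.rmtree(p)
    p.mkdir(parents=True)
    return p
```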

Thank you for the updates. After following your instructions, I have a new issue:

HTTPError: 401 Client Error: PermissionDenied for url: https://api.bing.microsoft.com/v7.0/images/search?q=grizzly&license=public&imageType=photo&count=150

Would you be open to a Zoom call to assist me further on this?

Update: After restarting the kernel and running the code, I am presented with this error:

NameError: name 'dest' is not defined

However, “dest” references: dest = 'images/grizzly.jpg'

so by default it will always grab the same image. Should I be grabbing a different image for each of my search queries, or am I missing something?
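One way around the fixed dest = 'images/grizzly.jpg' is to build a distinct destination folder per search term, then pass each folder to download_images with that term's URLs. A sketch (dest_for is a hypothetical helper name; the download loop is commented out since it needs the API calls shown earlier in the thread):

```python
from pathlib import Path

def dest_for(term, root="images"):
    """Distinct destination folder per search term, so downloads don't collide."""
    p = Path(root) / term.replace(" ", "_")
    p.mkdir(parents=True, exist_ok=True)
    return p

# Hypothetical usage, one folder per query:
# for term in ["grizzly", "black bear", "teddy bear"]:
#     urls = ...  # fetch URLs for `term` via the Bing API call shown earlier
#     download_images(dest_for(term), urls=urls)
```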

Can someone post all the cells from the beginning to show how to solve this problem? I have been going over the solutions posted here with no success. This is a major break in the initial steps of the process for chapter two. One of the authors or someone associated with the book needs to get involved and take this issue seriously. The notebook for chapter two should be updated to reflect the recent changes by Azure that have broken the code in the notebook.

1 Like

Hi @russnagel1, you may want to try this other option :

1 Like

Charles

Thank you for helping. Here is what I entered.

@nn.Charles @russnagel1
Hi there,

I have tried both solutions posted and have not had any success. If anyone does have a working solution please upload a screenshot of your code working or a link to your notebook so that I and others may debug further and look into the issue.

I entered the code below into the cell that originally called search_images_bing:

!pip install bing-image-downloader
from bing_image_downloader import downloader

for q in ["grizzly", "black bear", "teddy bear"]:
    downloader.download(q, limit=20, output_dir='bears', adult_filter_off=True, force_replace=False, timeout=5)

Edit: This is a community where we seek to grow together and further our skillset in deep learning. I do understand that some of you have solutions for this issue but a reminder please provide documentation and exact setup so that we may attain similar results.

1 Like

This just worked for me. Note it only downloaded 20 images per category, per limit=20 in the last line of code. Let me know how it works for you.

2 Likes

Hi @Alignedmind, @russnagel1,

This worked for me in Gradient.

1 Like

I am using gradient as well.

Hi @russnagel1,@nn.Charles

Awesome, that was tremendous teamwork. I’m very happy that this workaround is running. Hopefully the authors take notice of this issue and adopt this workaround or produce one themselves. Thank you, team, and keep on deep learning :smile: !

4 Likes