PYTHON WEB SCRAPING ERROR

import requests as r
from bs4 import BeautifulSoup as bs

url=r.get('Courses | CodeWithHarry')
soup=bs(url.text,'html.parser')
product=soup.find('div',id="__next").find_all('div')

for div in product:
    name=div.find('div',class_="title-font text-lg font-medium text-gray-900 mb-3").text
    print(name)

OUTPUT:

AttributeError                            Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10228/2463085742.py in <module>
      6 product=soup.find('div',id="__next").find_all('div')
      7 for div in product:
----> 8     name=div.find('div',class_="title-font text-lg font-medium text-gray-900 mb-3").text
      9     print(name)

AttributeError: 'NoneType' object has no attribute 'text'

Hey @IrshadKhan13 and welcome to the forum! I took the liberty of quickly cleaning up the code a little so we can discuss it.

import requests
from bs4 import BeautifulSoup as bs

response = requests.get('https://example.org/')
soup = bs(response.text, 'html.parser')
product = soup.find('div', id='__next').find_all('div')

for div in product:
    name = div.find('div', class_='title-font text-lg font-medium text-gray-900 mb-3').text
    print(name)

First, you're fetching a web page and parsing the response with an HTML parser. From that parsed response, you select some HTML elements and then iterate over them. The fact that your code gets past this point shows that product is not empty. Within the loop, you use find again to narrow the search a little further. At some point, though, find doesn't match any element, and in that case it returns None instead of raising an error. Since you chained .text onto the result, you're effectively evaluating None.text, which isn't a thing. Hence the AttributeError.
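One common way to guard against that is to check what find returned before touching .text. Here's a minimal sketch reusing the class string from your snippet (I'm assuming the product list from the cleaned-up code above; double-check the class against the actual page):

for div in product:
    name_div = div.find('div', class_='title-font text-lg font-medium text-gray-900 mb-3')
    # find returns None when nothing matches, so only read .text from an actual element
    if name_div is not None:
        print(name_div.text)

As an alternative, find_all returns an empty list rather than None when nothing matches, so looping over its result is safe even with zero hits.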

Try looking at the source you're parsing one more time; not every div under __next necessarily contains a child with that exact class, so some iterations will come up empty. Also, keep in mind that many modern web pages are rendered with JavaScript. In that case, what shows up in your browser's developer tools is not what you can retrieve with this method, because requests only fetches the initial HTML.
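A quick way to check whether that's what's happening (just a rough sketch, assuming the response variable from the cleaned-up code above) is to search the raw HTML for the class you're targeting:

# If this prints False, the class never appears in the HTML the server sent,
# which suggests the element is rendered client-side by JavaScript
print('title-font text-lg font-medium' in response.text)

If it turns out the page is rendered client-side, you'd need a tool that drives a real browser, such as Selenium or Playwright, instead of plain requests.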

Feel free to ask if you have more questions about this.