I am trying to get all the links of the courses on this page https://cursosdev.com/coupons, but when executing my script it returns an empty array []. I have tested it on other web pages and it works, but for some reason it does not work on this page. Is there anything I am missing?
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = 'https://cursosdev.com/coupons'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
# Extraction
eq = soup.find_all('a', class_='c-card block bg-white shadow-md hover:shadow-xl rounded-lg overflow-hidden'.replace(' ','.'))
print(eq)
The problem is that the server is blocking requests that use the default User-Agent of requests; if you look at the response you are receiving, it is a 403 Forbidden.
You need to change the User-Agent when making the request to get past that restriction. In addition, you are replacing the spaces with periods in the string you pass to find_all, so even if the request succeeded, it would not find anything, since the classes appear with spaces, not periods.
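A minimal sketch of the corrected script, assuming the class string from the question matches the class attribute on the page exactly; the User-Agent value below is just an example of a browser-like string, any recent browser UA should work:

from bs4 import BeautifulSoup
import requests

url = 'https://cursosdev.com/coupons'

# Send a browser-like User-Agent so the server does not answer with 403 Forbidden.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'
}
page = requests.get(url, headers=headers)
page.raise_for_status()  # fail loudly if the request is still rejected

soup = BeautifulSoup(page.content, 'html.parser')

# Pass the class string exactly as it appears in the HTML (spaces, not periods).
cards = soup.find_all(
    'a',
    class_='c-card block bg-white shadow-md hover:shadow-xl rounded-lg overflow-hidden'
)

# Alternative that is tolerant of class order/extra classes:
# cards = soup.select('a.c-card')

links = [card.get('href') for card in cards]
print(links)

With the User-Agent header set and the class string left untouched, find_all should return the course anchors and the list of hrefs instead of an empty list.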