Python Easy HTML Parser (EHP): Validating scraper row results by anchor attributes inside cells?

I am editing someone else’s code in a JSON file, and I am trying to validate scraper search results using Easy HTML Parser (not so easy!).

When testing in the terminal, the following BeautifulSoup code does what I want:

page_soup.select('table')[5].select('tr:contains("Search String")')

It grabs the rows from the table and validates that they are accurate. Unfortunately, when a search has no results, this table contains random rows, hence the need for validation.
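For anyone trying to reproduce this outside the JSON, here is a minimal runnable version of that validation; the markup below is a made-up stand-in for the real page, not the actual HTML:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Hypothetical stand-in for the real page: one table whose rows may or
# may not actually contain the search string.
html = """
<table class="forum_header_border">
  <tr><td><a href="/ep1">Search String S01E01</a></td></tr>
  <tr><td><a href="/ep2">Search String S01E02</a></td></tr>
  <tr><td><a href="/other">Unrelated filler row</a></td></tr>
</table>
"""

page_soup = BeautifulSoup(html, "html.parser")

# Keep only the rows whose text contains the search term; an empty
# list means the table held nothing but filler rows.
rows = page_soup.select('tr:contains("Search String")')
print(len(rows))  # 2
```

(Newer versions of Soup Sieve prefer the spelling `:-soup-contains("...")`; `:contains` still works but is a deprecated alias.)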

Although I can see that bs4 is imported in some of the program’s .py files, the code above (without the leading string page_soup.) does not seem to be recognised in the JSON. Using EHP instead, I can select the table and rows correctly, but without validation, using:

"row": "find_once('table', ('class', 'forum_header_border'), order=3).find_all('tr')"

The way this table works, if one row is valid, all rows are valid. For validation I need to compare an attribute of an anchor tag, inside a cell, inside a row, but I have not been able to get this to work. I have been looking at the various EHP options which return parents, like:

find_with_root(name, *args)

I have reviewed every other scraper’s “row” line in the JSON to try and adapt them to my purpose, but I have been unsuccessful.

This bin is an excerpt of the full HTML, containing the table with valid rows, as there were valid search results.

Is this kind of validation possible with Easy HTML Parser? Ultimately I would like it to compare the attribute to {title} if possible, but a manual string would be a good start.

For testing, to rule out any issue with {title}, I am starting by trying to find the parent td tag of an anchor tag with class 'searchinfo', then the parent tr tag of that, then the parent table tag of that. From that table I should then be able to find all trs.

I think I need something like the following, and this is wrong because I am either using the wrong method, the wrong syntax, or the wrong order - probably all three:

"row": "find_with_root('a', ('class', 'searchinfo')).find_with_root('td').find_all('tr')"

I only started looking at Python and scraping a few days ago, but someone with more experience may be able to work out the solution very quickly.