
Claude Code for selenium: Browser Automation and Web Scraping in Python

Published: May 27, 2028
Read time: 5 min read
By: Claude Skills 360

selenium automates real web browsers from Python: pip install selenium, then drive Chrome, Firefox, or Edge over the WebDriver protocol. The core API at a glance:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

opts = Options()
opts.add_argument("--headless=new")                  # headless Chrome
driver = webdriver.Chrome(options=opts)

driver.get("https://example.com")                    # navigate
el = driver.find_element(By.CSS_SELECTOR, ".title")  # find
el.text                                              # text content
el.get_attribute("href")                             # attribute
el.click(); el.clear(); el.send_keys("query")        # interact
el.send_keys(Keys.ENTER)                             # key press
el.submit()                                          # submit enclosing form

WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "result")))
ActionChains(driver).move_to_element(el).click().perform()
driver.execute_script("return document.title")       # run JS in the page
driver.get_screenshot_as_png()                       # screenshot bytes
driver.page_source                                   # full rendered HTML
Select(el).select_by_visible_text("Option")          # <select> dropdown
driver.switch_to.frame("frame_id")                   # enter iframe
driver.switch_to.alert.accept()                      # accept alert
driver.switch_to.window(driver.window_handles[1])    # switch tab/window
driver.quit()                                        # always clean up

webdriver-manager (pip install webdriver-manager) downloads the matching driver binary. In selenium 4 the path must be passed through a Service, not as a positional argument:

from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(options=opts, service=Service(ChromeDriverManager().install()))

Claude Code generates selenium page objects, test fixtures, scraping pipelines, and form automation.

CLAUDE.md for selenium

## selenium Stack
- Version: selenium >= 4.18 | pip install selenium webdriver-manager
- Driver: webdriver.Chrome(options=opts) | webdriver.Firefox() | webdriver.Edge()
- Headless: options.add_argument("--headless=new") (Chrome) | add_argument("-headless") (Firefox)
- Find: driver.find_element(By.CSS_SELECTOR, "sel") | By.XPATH | By.ID | By.NAME
- Wait: WebDriverWait(driver, timeout).until(EC.visibility_of_element_located((By, sel)))
- Quit: driver.quit()  # always in finally block
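The WebDriverWait line above hides a simple mechanic: poll a condition until it returns something truthy or the timeout elapses. A minimal pure-Python sketch of that loop — `wait_until` is a hypothetical stand-in, not selenium API, and no browser is required to run it:

```python
import time

def wait_until(condition, timeout: float = 5.0, poll: float = 0.1):
    """Poll condition() until it returns a truthy value or timeout elapses.

    Mirrors the shape of WebDriverWait(driver, timeout).until(cond); the
    real class additionally swallows a configurable set of ignored
    exceptions between polls.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll)

# Simulate an element that only "appears" on the third poll.
state = {"polls": 0}
def element_present():
    state["polls"] += 1
    return "element" if state["polls"] >= 3 else None

print(wait_until(element_present, timeout=2.0))  # element
```

The same shape explains why explicit waits beat `time.sleep`: the loop returns as soon as the condition holds instead of always paying the full delay.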

selenium Browser Automation Pipeline

# app/browser.py — selenium driver factory, page object, waits, scraping, and pytest
from __future__ import annotations

import logging
import time
from contextlib import contextmanager
from pathlib import Path
from typing import Any, Generator

from selenium import webdriver
from selenium.common.exceptions import (
    NoSuchElementException,
    StaleElementReferenceException,
    TimeoutException,
    WebDriverException,
)
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.remote.webelement import WebElement
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import Select, WebDriverWait


log = logging.getLogger(__name__)


# ─────────────────────────────────────────────────────────────────────────────
# 1. Driver factory
# ─────────────────────────────────────────────────────────────────────────────

def make_chrome(
    headless: bool = True,
    window_size: tuple[int, int] = (1280, 900),
    disable_images: bool = False,
    proxy: str | None = None,
    user_agent: str | None = None,
    download_dir: str | None = None,
    driver_path: str | None = None,
) -> webdriver.Chrome:
    """
    Create a Chrome WebDriver with common options pre-configured.

    headless=True: no browser window — standard for CI and servers.
    disable_images: speeds up scraping by skipping image downloads.
    proxy: "host:port" string.
    driver_path: path to chromedriver binary (auto-detected if None).

    Example:
        driver = make_chrome(headless=True, disable_images=True)
        driver.get("https://example.com")
    """
    opts = ChromeOptions()
    if headless:
        opts.add_argument("--headless=new")
    opts.add_argument(f"--window-size={window_size[0]},{window_size[1]}")
    opts.add_argument("--no-sandbox")
    opts.add_argument("--disable-dev-shm-usage")
    opts.add_argument("--disable-gpu")
    opts.add_argument("--disable-blink-features=AutomationControlled")
    opts.add_experimental_option("excludeSwitches", ["enable-automation"])
    opts.add_experimental_option("useAutomationExtension", False)

    if user_agent:
        opts.add_argument(f"--user-agent={user_agent}")
    if proxy:
        opts.add_argument(f"--proxy-server={proxy}")

    prefs: dict[str, Any] = {}
    if disable_images:
        prefs["profile.managed_default_content_settings.images"] = 2
    if download_dir:
        prefs["download.default_directory"] = str(download_dir)
    if prefs:
        opts.add_experimental_option("prefs", prefs)

    service = ChromeService(executable_path=driver_path) if driver_path else None
    return webdriver.Chrome(options=opts, service=service)


@contextmanager
def driver_ctx(browser: str = "chrome", **kwargs) -> Generator[WebDriver, None, None]:
    """
    Context manager that creates and quits a WebDriver automatically.

    Usage:
        with driver_ctx(headless=True) as driver:
            driver.get("https://example.com")
            title = driver.title
    """
    if browser == "chrome":
        driver = make_chrome(**kwargs)
    elif browser == "firefox":
        from selenium.webdriver.firefox.options import Options as FirefoxOptions
        opts = FirefoxOptions()
        if kwargs.get("headless"):
            opts.add_argument("-headless")
        driver = webdriver.Firefox(options=opts)
    else:
        raise ValueError(f"Unknown browser: {browser}")

    try:
        yield driver
    finally:
        try:
            driver.quit()
        except Exception:
            pass


# ─────────────────────────────────────────────────────────────────────────────
# 2. Wait and find helpers
# ─────────────────────────────────────────────────────────────────────────────

def wait_for(
    driver: WebDriver,
    locator: tuple[str, str],
    timeout: float = 10.0,
    condition: str = "visible",
) -> WebElement:
    """
    Wait for an element and return it.
    condition: "visible" | "clickable" | "present" | "invisible"

    Example:
        el = wait_for(driver, (By.ID, "submit-btn"), timeout=5, condition="clickable")
        el.click()
    """
    cond_map = {
        "visible":   EC.visibility_of_element_located,
        "clickable": EC.element_to_be_clickable,
        "present":   EC.presence_of_element_located,
        "invisible": EC.invisibility_of_element_located,
    }
    fn = cond_map.get(condition, EC.visibility_of_element_located)
    return WebDriverWait(driver, timeout).until(fn(locator))


def find(
    driver: WebDriver | WebElement,
    selector: str,
    by: str = By.CSS_SELECTOR,
    default: WebElement | None = None,
) -> WebElement | None:
    """
    Safe find_element — returns None (or default) instead of raising.

    Example:
        price = find(card, ".price")
        text = price.text if price else "N/A"
    """
    try:
        return driver.find_element(by, selector)
    except NoSuchElementException:
        return default


def find_all(
    driver: WebDriver | WebElement,
    selector: str,
    by: str = By.CSS_SELECTOR,
) -> list[WebElement]:
    """Safe find_elements — always returns a list, never raises."""
    try:
        return driver.find_elements(by, selector)
    except WebDriverException:
        return []


def get_text(el: WebElement | None, default: str = "") -> str:
    """Get element text, returning default on None or stale element."""
    if el is None:
        return default
    try:
        return el.text.strip()
    except StaleElementReferenceException:
        return default


def get_attr(el: WebElement | None, attr: str, default: str = "") -> str:
    """Get element attribute, returning default on None."""
    if el is None:
        return default
    try:
        return el.get_attribute(attr) or default
    except StaleElementReferenceException:
        return default


# ─────────────────────────────────────────────────────────────────────────────
# 3. Interactions
# ─────────────────────────────────────────────────────────────────────────────

def scroll_to(driver: WebDriver, element: WebElement) -> None:
    """Scroll the page until element is visible."""
    driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", element)
    time.sleep(0.2)


def slow_type(element: WebElement, text: str, delay: float = 0.05) -> None:
    """Type text into an element one character at a time (human-like)."""
    element.clear()
    for char in text:
        element.send_keys(char)
        time.sleep(delay)


def select_option(element: WebElement, text: str | None = None,
                  value: str | None = None, index: int | None = None) -> None:
    """Select a <select> dropdown option by visible text, value, or index."""
    sel = Select(element)
    if text is not None:
        sel.select_by_visible_text(text)
    elif value is not None:
        sel.select_by_value(value)
    elif index is not None:
        sel.select_by_index(index)
    else:
        raise ValueError("select_option requires one of text, value, or index")


def js_click(driver: WebDriver, element: WebElement) -> None:
    """Click element via JavaScript (bypasses overlay blocking issues)."""
    driver.execute_script("arguments[0].click();", element)


def hover(driver: WebDriver, element: WebElement) -> None:
    """Move the mouse to hover over an element."""
    ActionChains(driver).move_to_element(element).perform()


def dismiss_alert(driver: WebDriver, accept: bool = True) -> str:
    """Accept or dismiss a JavaScript alert/confirm. Returns alert text."""
    alert = driver.switch_to.alert
    text = alert.text
    if accept:
        alert.accept()
    else:
        alert.dismiss()
    return text


# ─────────────────────────────────────────────────────────────────────────────
# 4. Scraping helpers
# ─────────────────────────────────────────────────────────────────────────────

def scrape_table(driver: WebDriver, css: str = "table") -> list[dict]:
    """
    Scrape an HTML table into a list of dicts using the header row as keys.

    Example:
        driver.get("https://example.com/data")
        rows = scrape_table(driver, "table.results")
    """
    tables = find_all(driver, css)
    if not tables:
        return []

    table   = tables[0]
    headers = [get_text(th) for th in find_all(table, "th")]
    results = []

    # Prefer <tbody> rows; fall back to all <tr>. Header rows contain <th>
    # only, so they yield no <td> cells and are skipped by the check below.
    rows = find_all(table, "tbody tr") or find_all(table, "tr")
    for tr in rows:
        cells = [get_text(td) for td in find_all(tr, "td")]
        if cells and any(cells):
            row = dict(zip(headers, cells)) if headers else {str(i): v for i, v in enumerate(cells)}
            results.append(row)

    return results


def scrape_listing(
    driver: WebDriver,
    item_css: str,
    fields: dict[str, str],
) -> list[dict]:
    """
    Scrape a list of items from a page.
    fields: {"field_name": "css_selector"} — each applied to the item element.

    Example:
        driver.get("https://shop.example.com/products")
        products = scrape_listing(driver, ".product-card", {
            "name":  "h3",
            "price": ".price",
            "href":  "a",  # uses href attribute
        })
    """
    results = []
    for card in find_all(driver, item_css):
        row: dict = {}
        for field_name, sel in fields.items():
            el = find(card, sel)
            if el:
                href = el.get_attribute("href")
                row[field_name] = href if field_name in ("href", "url", "link") and href else get_text(el)
            else:
                row[field_name] = ""
        results.append(row)
    return results


def paginate_and_scrape(
    driver: WebDriver,
    scrape_fn,
    next_btn_css: str = ".next-page, a[rel='next']",
    max_pages: int = 50,
    page_delay: float = 1.5,
) -> list:
    """
    Scrape multiple pages by clicking a next-page button.
    scrape_fn(driver) → list — called each page.
    Returns combined results from all pages.
    """
    all_results = []
    for page_num in range(1, max_pages + 1):
        results = scrape_fn(driver)
        all_results.extend(results)
        log.info("Page %d: scraped %d items (total: %d)", page_num, len(results), len(all_results))

        next_btn = find(driver, next_btn_css)
        if next_btn is None or not next_btn.is_enabled():
            break

        try:
            scroll_to(driver, next_btn)
            next_btn.click()
            time.sleep(page_delay)
        except Exception as e:
            log.warning("Could not click next page: %s", e)
            break

    return all_results


# ─────────────────────────────────────────────────────────────────────────────
# 5. Page Object base class
# ─────────────────────────────────────────────────────────────────────────────

class BasePage:
    """
    Base class for Page Object Model pattern.
    Inherit and define element locators + methods per page.

    Usage:
        class LoginPage(BasePage):
            def login(self, email, password):
                self.find((By.ID, "email")).send_keys(email)
                self.find((By.ID, "password")).send_keys(password)
                self.wait_and_click((By.ID, "submit"))

        with driver_ctx(headless=True) as driver:
            page = LoginPage(driver, "https://app.example.com/login")
            page.login("[email protected]", "secret")
    """

    def __init__(self, driver: WebDriver, url: str | None = None) -> None:
        self.driver = driver
        if url:
            driver.get(url)

    def find(self, locator: tuple, timeout: float = 10.0) -> WebElement:
        return wait_for(self.driver, locator, timeout=timeout, condition="present")

    def wait_and_click(self, locator: tuple, timeout: float = 10.0) -> None:
        el = wait_for(self.driver, locator, timeout=timeout, condition="clickable")
        scroll_to(self.driver, el)
        el.click()

    def get_text(self, locator: tuple, timeout: float = 10.0) -> str:
        el = wait_for(self.driver, locator, timeout=timeout, condition="visible")
        return el.text.strip()

    def fill_field(self, locator: tuple, value: str, clear: bool = True) -> None:
        el = wait_for(self.driver, locator, timeout=10, condition="clickable")
        if clear:
            el.clear()
        el.send_keys(value)

    def screenshot(self, path: str | Path) -> Path:
        p = Path(path)
        self.driver.get_screenshot_as_file(str(p))
        return p

    @property
    def title(self) -> str:
        return self.driver.title

    @property
    def url(self) -> str:
        return self.driver.current_url


# ─────────────────────────────────────────────────────────────────────────────
# 6. pytest fixture
# ─────────────────────────────────────────────────────────────────────────────

PYTEST_FIXTURE = '''
# conftest.py — selenium pytest fixtures
import pytest
from app.browser import make_chrome, driver_ctx

@pytest.fixture(scope="session")
def chrome_driver():
    """Session-scoped Chrome driver — shared across all tests in the session."""
    driver = make_chrome(headless=True)
    yield driver
    driver.quit()

@pytest.fixture
def driver():
    """Function-scoped Chrome driver — fresh instance per test."""
    with driver_ctx(headless=True) as drv:
        yield drv

# Example test:
# def test_homepage_title(driver):
#     driver.get("https://example.com")
#     assert "Example Domain" in driver.title
'''


# ─────────────────────────────────────────────────────────────────────────────
# Demo
# ─────────────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    print("selenium demo requires a browser + chromedriver.")
    print("Quick usage:")
    print()
    print("  with driver_ctx(headless=True) as driver:")
    print("    driver.get('https://example.com')")
    print("    print(driver.title)")
    print("    png = driver.get_screenshot_as_png()")
    print("    Path('screenshot.png').write_bytes(png)")
    print()
    print("Install chromedriver: pip install webdriver-manager")
    print("Then: from selenium.webdriver.chrome.service import Service as ChromeService")
    print("      from webdriver_manager.chrome import ChromeDriverManager")
    print("      service = ChromeService(ChromeDriverManager().install())")
    print("      driver = webdriver.Chrome(options=opts, service=service)")
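One failure mode the helpers above only partially cover: an element reference goes stale when the DOM re-renders between find and read. A hedged sketch of a retry wrapper — `retry_stale` is a hypothetical addition, and a local stand-in exception class is used so the snippet runs without selenium installed; in real code catch selenium.common.exceptions.StaleElementReferenceException instead:

```python
import time

# Stand-in so this sketch runs without a browser or the selenium package;
# swap in selenium.common.exceptions.StaleElementReferenceException in practice.
class StaleElementReferenceException(Exception):
    pass

def retry_stale(fn, attempts: int = 3, delay: float = 0.05):
    """Call fn(), retrying when the element reference has gone stale.

    Re-raises on the final attempt so failures still surface.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except StaleElementReferenceException:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Simulate a read that fails twice while the page re-renders, then succeeds.
state = {"calls": 0}
def flaky_text():
    state["calls"] += 1
    if state["calls"] < 3:
        raise StaleElementReferenceException("element went stale")
    return "fresh text"

print(retry_stale(flaky_text))  # fresh text
```

Typical use would wrap a re-find plus read, e.g. `retry_stale(lambda: driver.find_element(By.CSS_SELECTOR, ".price").text)`, so the element is located afresh on every attempt.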

For the Playwright alternative: Playwright (via playwright-python) supports Chromium, Firefox, and WebKit with auto-wait semantics, network interception, multiple browser contexts, and a faster CDP-based protocol. selenium uses the WebDriver standard, which has broader enterprise tool support (Selenium Grid, BrowserStack, SauceLabs) and the largest ecosystem of tutorials and examples. Use Playwright for new projects where speed and auto-waiting matter; use selenium when your org runs existing Selenium Grid infrastructure or third-party recording tools.

For the requests / httpx alternative: requests and httpx fetch raw HTML/JSON at the network level, while selenium controls a real browser that renders JavaScript, executes page scripts, and handles authentication flows. Use selenium when the page requires JavaScript execution, cookie-based sessions, or simulated user interactions (click, type, scroll).

The Claude Skills 360 bundle includes selenium skill sets covering the make_chrome() headless factory, the driver_ctx() context manager, wait_for() with visible/clickable/present conditions, the find()/find_all() safe wrappers, the get_text()/get_attr() null-safe helpers, scroll_to()/slow_type()/select_option() interactions, js_click()/hover()/dismiss_alert(), the scrape_table()/scrape_listing() extractors, the paginate_and_scrape() multi-page crawler, the BasePage page object pattern, and pytest session/function fixtures. Start with the free tier to try browser automation and web scraping code generation.
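When the target page is static (no JavaScript), the table-to-dicts shape of scrape_table() works at the network level too. A hedged sketch using only the standard library's html.parser — `TableParser` is a hypothetical helper for illustration; real pages usually warrant BeautifulSoup or lxml:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect <tr> rows of cell text from an HTML <table>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], [], None

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self._cell = []          # start buffering cell text

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._cell is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = []

html = ("<table><tr><th>name</th><th>price</th></tr>"
        "<tr><td>Widget</td><td>$9.99</td></tr></table>")
p = TableParser()
p.feed(html)
headers, *data = p.rows
records = [dict(zip(headers, row)) for row in data]
print(records)  # [{'name': 'Widget', 'price': '$9.99'}]
```

In practice the HTML string would come from requests or httpx; the zip-with-headers step is the same one scrape_table() applies to live WebElements.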
