
Claude Code for spaCy: Industrial-Strength NLP

Published: October 31, 2027
Read time: 5 min read
By: Claude Skills 360

spaCy provides industrial-strength NLP with pretrained pipelines. Install it with pip install spacy and download a model with python -m spacy download en_core_web_sm. After import spacy, load the small CPU pipeline with nlp = spacy.load("en_core_web_sm"), or use "en_core_web_trf" for the transformer pipeline (requires pip install spacy-transformers). Process text with doc = nlp("Apple is headquartered in Cupertino."). Iterate tokens with for token in doc and read token.text, token.lemma_, token.pos_, token.dep_, and token.is_stop; iterate entities with for ent in doc.ents and read ent.text, ent.label_, ent.start_char, and ent.end_char. Batch-process efficiently with docs = list(nlp.pipe(texts, batch_size=64)). Sentences come from doc.sents and base noun phrases from doc.noun_chunks. For token-pattern rules, use the Matcher: from spacy.matcher import Matcher; matcher = Matcher(nlp.vocab); matcher.add("PATTERN", [[{"LOWER": "iphone"}, {"IS_DIGIT": True}]]); matches = matcher(doc). Match exact phrases in bulk with the PhraseMatcher: pm = PhraseMatcher(nlp.vocab); pm.add("BRAND", [nlp.make_doc("Apple Inc")]). Add a dictionary-based gazetteer with ruler = nlp.add_pipe("entity_ruler") and ruler.add_patterns([{"label": "ORG", "pattern": "OpenAI"}]). Register a custom pipeline component by decorating a function that takes and returns a Doc with @Language.component("my_comp"). Visualize with from spacy import displacy and displacy.render(doc, style="ent"), which returns HTML. Serialize a single doc with doc.to_disk("/path") or whole batches with DocBin. Train models with spacy train config.cfg --output ./model. Claude Code generates spaCy NLP pipelines, NER trainers, rule-based extractors, text classifiers, and custom component scripts.
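A minimal script tying those calls together — a sketch that assumes en_core_web_sm is already downloaded; the sample texts are illustrative:

# quickstart.py — minimal spaCy sketch (sample texts are illustrative)
import spacy

nlp = spacy.load("en_core_web_sm")                  # CPU pipeline
doc = nlp("Apple is headquartered in Cupertino.")

for token in doc:                                   # token attributes
    print(token.text, token.lemma_, token.pos_, token.dep_, token.is_stop)

for ent in doc.ents:                                # named entities
    print(ent.text, ent.label_, ent.start_char, ent.end_char)

texts = ["First document.", "Second document."]
docs = list(nlp.pipe(texts, batch_size=64))         # efficient batch processing

for sent in doc.sents:                              # sentence segmentation
    print(sent.text)

print(list(doc.noun_chunks))                        # base noun phrases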

CLAUDE.md for spaCy

## spaCy Stack
- Version: spacy >= 3.7
- Models: en_core_web_sm | en_core_web_md | en_core_web_lg | en_core_web_trf
- Load: nlp = spacy.load("en_core_web_sm") | spacy.blank("en")
- Process: doc = nlp(text) | docs = list(nlp.pipe(texts, batch_size=64))
- Tokens: token.text | .lemma_ | .pos_ | .dep_ | .is_stop | .ent_type_
- Entities: doc.ents → (ent.text, ent.label_, ent.start_char, ent.end_char)
- Rules: Matcher(vocab) | PhraseMatcher | nlp.add_pipe("entity_ruler")
- Custom: @Language.component("name") → nlp.add_pipe("name")
- Train: spacy train config.cfg --output ./output
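The Train line assumes a config.cfg and .spacy corpora on disk. A minimal sketch of producing both — the toy annotation, file paths, and CLI flags below are illustrative assumptions, not taken from the article:

# ner/make_corpus.py — sketch: build .spacy training files for spacy train
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")

# Toy annotation: (text, {"entities": [(start_char, end_char, label)]})
TRAIN_DATA = [
    ("Anthropic released Claude.", {"entities": [(0, 9, "ORG")]}),
]

db = DocBin()
for text, ann in TRAIN_DATA:
    doc = nlp.make_doc(text)
    spans = []
    for start, end, label in ann["entities"]:
        span = doc.char_span(start, end, label=label, alignment_mode="contract")
        if span is not None:                        # skip misaligned annotations
            spans.append(span)
    doc.ents = spans
    db.add(doc)
db.to_disk("./train.spacy")

# Generate a config and train from the CLI (paths are illustrative):
#   python -m spacy init config config.cfg --lang en --pipeline ner --optimize efficiency
#   python -m spacy train config.cfg --output ./output --paths.train ./train.spacy --paths.dev ./dev.spacy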

spaCy NLP Pipeline

# nlp/spacy_pipeline.py — industrial NLP with spaCy
from __future__ import annotations
from collections import Counter, defaultdict

import spacy
from spacy import displacy
from spacy.language import Language
from spacy.matcher import Matcher, PhraseMatcher
from spacy.tokens import Doc
from spacy.training import Example


# ── 1. Model loading ──────────────────────────────────────────────────────────

def load_nlp(
    model:   str = "en_core_web_sm",
    disable: list[str] | None = None,
    exclude: list[str] | None = None,
) -> Language:
    """
    Load a spaCy pipeline.

    Models (English):
    - en_core_web_sm  — 12MB, CPU, fast (NER+POS+DEP)
    - en_core_web_md  — 43MB, + word vectors
    - en_core_web_lg  — 741MB, + larger word vectors
    - en_core_web_trf — transformer (requires spacy-transformers)

    disable: skip components but keep in pipeline (e.g., ["ner"])
    exclude: remove components entirely (e.g., ["parser"] for faster NER-only)
    """
    nlp = spacy.load(model, disable=disable or [], exclude=exclude or [])
    print(f"spaCy {spacy.__version__} | model={model} | pipes={nlp.pipe_names}")
    return nlp


def create_blank(lang: str = "en") -> Language:
    """Create a blank pipeline for building from scratch."""
    return spacy.blank(lang)


# ── 2. Text processing ────────────────────────────────────────────────────────

def process(nlp: Language, text: str) -> Doc:
    """Process a single text."""
    return nlp(text)


def process_batch(
    nlp:        Language,
    texts:      list[str],
    batch_size: int = 64,
    n_process:  int = 1,   # Parallelism (1 = no multiprocessing)
) -> list[Doc]:
    """
    Process many texts efficiently with nlp.pipe.
    n_process > 1 uses multiprocessing (not compatible with GPU).
    """
    return list(nlp.pipe(texts, batch_size=batch_size, n_process=n_process))


# ── 3. Token-level extraction ─────────────────────────────────────────────────

def extract_tokens(doc: Doc, include_punct: bool = False, include_stop: bool = True) -> list[dict]:
    """Extract token features as a list of dicts."""
    tokens = []
    for tok in doc:
        if not include_punct and tok.is_punct:
            continue
        if not include_stop and tok.is_stop:
            continue
        tokens.append({
            "text":    tok.text,
            "lemma":   tok.lemma_,
            "pos":     tok.pos_,      # Universal POS
            "tag":     tok.tag_,      # Fine-grained POS
            "dep":     tok.dep_,      # Dependency label
            "is_stop": tok.is_stop,
            "is_alpha": tok.is_alpha,
        })
    return tokens


def lemmatize(nlp: Language, text: str) -> list[str]:
    """Lemmatize all non-stop, non-punct tokens."""
    doc = nlp(text)
    return [tok.lemma_ for tok in doc if not tok.is_stop and not tok.is_punct and tok.is_alpha]


def get_noun_phrases(doc: Doc) -> list[str]:
    """Extract noun chunks (base noun phrases)."""
    return [chunk.text for chunk in doc.noun_chunks]


def get_word_frequencies(
    nlp:   Language,
    texts: list[str],
    min_count: int = 2,
) -> dict[str, int]:
    """Count word frequencies across a corpus (excluding stop words)."""
    counter: Counter = Counter()
    for doc in nlp.pipe(texts, batch_size=64):
        for tok in doc:
            if tok.is_alpha and not tok.is_stop and len(tok.text) > 2:
                counter[tok.lemma_.lower()] += 1
    return {w: c for w, c in counter.most_common() if c >= min_count}


# ── 4. Named entity recognition ───────────────────────────────────────────────

def extract_entities(doc: Doc) -> list[dict]:
    """Extract named entities from a doc."""
    return [
        {
            "text":       ent.text,
            "label":      ent.label_,
            "start_char": ent.start_char,
            "end_char":   ent.end_char,
            "start_tok":  ent.start,
            "end_tok":    ent.end,
        }
        for ent in doc.ents
    ]


def extract_entities_by_type(doc: Doc, entity_type: str) -> list[str]:
    """Get all entities of a specific type (e.g., 'ORG', 'PERSON', 'GPE')."""
    return [ent.text for ent in doc.ents if ent.label_ == entity_type]


def entity_counts(
    nlp:   Language,
    texts: list[str],
    types: list[str] | None = None,
) -> dict[str, Counter]:
    """Count entity occurrences across a corpus."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for doc in nlp.pipe(texts, batch_size=64):
        for ent in doc.ents:
            if types is None or ent.label_ in types:
                counts[ent.label_][ent.text] += 1
    return dict(counts)


# ── 5. Rule-based matching ────────────────────────────────────────────────────

def build_token_matcher(
    nlp:      Language,
    patterns: dict[str, list[list[dict]]],
) -> Matcher:
    """
    Build a token-pattern Matcher.
    patterns: {"LABEL": [[{token_attr: value}, ...]]} 
    
    Token attributes: LOWER, TEXT, LEMMA, POS, TAG, DEP, IS_DIGIT, IS_ALPHA, ORTH
    """
    matcher = Matcher(nlp.vocab)
    for label, pattern_list in patterns.items():
        matcher.add(label, pattern_list)
    return matcher


def build_phrase_matcher(
    nlp:      Language,
    phrases:  dict[str, list[str]],
    attr:     str = "LOWER",  # "LOWER" | "TEXT" | "LEMMA"
) -> PhraseMatcher:
    """
    Build a PhraseMatcher for efficient bulk matching of exact multi-token phrases.
    phrases: {"LABEL": ["phrase1", "phrase2", ...]}
    """
    pm = PhraseMatcher(nlp.vocab, attr=attr)
    for label, phrase_list in phrases.items():
        patterns = [nlp.make_doc(p) for p in phrase_list]
        pm.add(label, patterns)
    return pm


def match_text(
    doc:        Doc,
    matcher:    Matcher | PhraseMatcher,
) -> list[dict]:
    """Run matcher on doc and return structured matches."""
    matches = matcher(doc)
    results = []
    for match_id, start, end in matches:
        span = doc[start:end]
        results.append({
            "label": doc.vocab.strings[match_id],
            "text":  span.text,
            "start_char": span.start_char,
            "end_char":   span.end_char,
        })
    return results


# ── 6. Entity ruler (gazetteer) ───────────────────────────────────────────────

def add_entity_ruler(
    nlp:      Language,
    patterns: list[dict],
    before:   str = "ner",
    overwrite: bool = False,
) -> Language:
    """
    Add an EntityRuler for dictionary-based NER.
    patterns: [{"label": "ORG", "pattern": "OpenAI"}, {"label": "PRODUCT", "pattern": [{"LOWER": "iphone"}, {"IS_DIGIT": True}]}]
    The ruler runs before the statistical NER by default.
    """
    config = {"overwrite_ents": overwrite}
    ruler  = nlp.add_pipe("entity_ruler", config=config, before=before)
    ruler.add_patterns(patterns)
    return nlp


# ── 7. Custom pipeline component ─────────────────────────────────────────────

def add_sentence_stats_component(nlp: Language) -> Language:
    """
    Example custom component: adds sentence stats as Doc extension attributes.
    """
    if not Doc.has_extension("n_sentences"):
        Doc.set_extension("n_sentences", default=0)
    if not Doc.has_extension("avg_sent_len"):
        Doc.set_extension("avg_sent_len", default=0.0)
    if not Doc.has_extension("entity_density"):
        Doc.set_extension("entity_density", default=0.0)

    @Language.component("sentence_stats")
    def sentence_stats(doc: Doc) -> Doc:
        sents     = list(doc.sents)
        n_sents   = len(sents)
        avg_len   = sum(len(s) for s in sents) / max(n_sents, 1)
        ent_density = len(doc.ents) / max(len(doc), 1)

        doc._.n_sentences    = n_sents
        doc._.avg_sent_len   = round(avg_len, 2)
        doc._.entity_density = round(ent_density, 4)
        return doc

    if "sentence_stats" not in nlp.pipe_names:
        nlp.add_pipe("sentence_stats", last=True)
    return nlp


# ── 8. Training data preparation ─────────────────────────────────────────────

def make_ner_examples(
    nlp:  Language,
    data: list[tuple[str, dict]],
) -> list[Example]:
    """
    Convert (text, {"entities": [(start, end, label)]}) training data to Examples.
    Skips misaligned annotations automatically using alignment_mode="contract".
    """
    examples = []
    for text, annotations in data:
        doc  = nlp.make_doc(text)
        ents = []
        for start, end, label in annotations.get("entities", []):
            span = doc.char_span(start, end, label=label, alignment_mode="contract")
            if span is not None:
                ents.append(span)
        doc.set_ents(ents)
        examples.append(Example.from_dict(doc, {"entities": [
            (span.start_char, span.end_char, span.label_) for span in ents
        ]}))
    return examples


def train_ner(
    nlp:        Language,
    examples:   list[Example],
    n_iter:     int   = 30,
    drop:       float = 0.2,
    save_path:  str   = "./ner-model",
) -> Language:
    """
    Train NER component from scratch on provided examples.
    nlp should be a blank model with NER added: nlp.add_pipe("ner").
    """
    ner = nlp.get_pipe("ner")

    # Add labels
    for ex in examples:
        for ent in ex.reference.ents:
            ner.add_label(ent.label_)

    other_pipes = [p for p in nlp.pipe_names if p != "ner"]
    with nlp.select_pipes(disable=other_pipes):
        optimizer = nlp.initialize(lambda: examples)
        for i in range(n_iter):
            losses: dict = {}
            nlp.update(examples, drop=drop, sgd=optimizer, losses=losses)
            if (i + 1) % 10 == 0:
                print(f"  iter {i+1}/{n_iter} | NER loss: {losses.get('ner', 0):.3f}")

    nlp.to_disk(save_path)
    print(f"NER model saved: {save_path}")
    return nlp


# ── 9. Visualization ──────────────────────────────────────────────────────────

def visualize_entities(
    doc:    Doc,
    style:  str = "ent",
    return_html: bool = True,
) -> str | None:
    """Render entity or dependency visualization as HTML."""
    return displacy.render(doc, style=style, jupyter=False) if return_html else None


# ── Demo ──────────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    nlp = load_nlp("en_core_web_sm")

    texts = [
        "Apple Inc. was founded by Steve Jobs in Cupertino, California.",
        "Elon Musk's Tesla and SpaceX are headquartered in Austin, Texas.",
        "The European Union imposed a €1.49 billion fine on Google in 2023.",
    ]

    print("Named Entity Recognition:")
    for text in texts:
        doc  = nlp(text)
        ents = extract_entities(doc)
        orgs = extract_entities_by_type(doc, "ORG")
        people = extract_entities_by_type(doc, "PERSON")
        print(f"\n  '{text[:60]}...'")
        print(f"    ORG: {orgs}")
        print(f"    PERSON: {people}")

    print("\nNoun phrases:")
    doc = nlp(texts[0])
    print(f"  {get_noun_phrases(doc)}")

    print("\nLemmatized tokens:")
    print(f"  {lemmatize(nlp, texts[0])}")

    print("\nToken patterns — iPhone model numbers:")
    matcher = build_token_matcher(nlp, {
        "IPHONE_MODEL": [[{"LOWER": "iphone"}, {"IS_DIGIT": True}]],
    })
    test_doc = nlp("I have an iPhone 15 and my friend has an iPhone 14 Pro.")
    matches = match_text(test_doc, matcher)
    print(f"  {matches}")

    print("\nEntity ruler example:")
    nlp2 = load_nlp("en_core_web_sm")
    add_entity_ruler(nlp2, [
        {"label": "AI_COMPANY", "pattern": "OpenAI"},
        {"label": "AI_COMPANY", "pattern": "Anthropic"},
        {"label": "AI_MODEL",   "pattern": "Claude"},
    ])
    doc2 = nlp2("Anthropic released Claude, which competes with OpenAI's ChatGPT.")
    print(f"  {[(e.text, e.label_) for e in doc2.ents]}")

Consider NLTK as an alternative for classical NLP tasks like n-gram language models, WordNet lookups, or corpus-driven linguistics research — its breadth of corpora and algorithms is unmatched for research. spaCy's industrial design, with pre-built pipelines, fast nlp.pipe batch processing, and a Cython-optimized tokenizer, often runs an order of magnitude or more faster on production text volumes, making it the clear choice for APIs and microservices that process user-submitted text at scale. Consider Hugging Face Transformers when you need state-of-the-art transformer NER, zero-shot classification, or question answering with the full BERT/RoBERTa ecosystem. spaCy's en_core_web_trf pipeline wraps RoBERTa for transformer-quality NER behind spaCy's unified Doc API, and the EntityRuler + Matcher combination for rule-based pattern extraction has no direct equivalent in transformers, making spaCy the right choice when you need both statistical and symbolic NLP in one pipeline. The Claude Skills 360 bundle includes spaCy skill sets covering model loading, batch processing, token and entity extraction, noun chunk and lemma analysis, Matcher and PhraseMatcher rule patterns, EntityRuler gazetteers, custom pipeline components, NER training, and displacy visualization. Start with the free tier to try NLP pipeline code generation.
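
A small sketch of that statistical-plus-symbolic combination: transformer NER with a rule-based gazetteer in one pipeline. It assumes pip install spacy-transformers and python -m spacy download en_core_web_trf; the gazetteer entries and sample sentence are illustrative:

# nlp/trf_with_rules.py — sketch: transformer NER plus EntityRuler in one pipeline
import spacy

nlp = spacy.load("en_core_web_trf")                           # RoBERTa-backed pipeline
ruler = nlp.add_pipe("entity_ruler", before="ner")            # rules run before statistical NER
ruler.add_patterns([
    {"label": "AI_MODEL", "pattern": "Claude"},
    {"label": "ORG",      "pattern": "Anthropic"},
])

doc = nlp("Anthropic released Claude, which competes with OpenAI's ChatGPT.")
print([(ent.text, ent.label_) for ent in doc.ents])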


