My Time with (Paid) AI 2: Claude Code Boogaloo
This is the second post in a series of ongoing experiments with paid AI models, specifically Claude Opus 4.6. The same warning from the last experiment applies:
This blog post is going to be more fluid than most. Rather than revise my thoughts over and over again, pruning and editing context for future me (and others, if they choose) to reflect on and digest, this will instead be very much "train of thought"/"stream of consciousness".
It will have errors. It will have mistakes. There will be stupidity, idiocy, and general negativity. I will absolutely be a snarky bitch throughout. There will also (hopefully) be some surprises along the way, otherwise I'm out twenty-one dollars and change for bupkis and fully validated that this thing is stupid and overhyped.
So, y'know, be warned.
Alright, let's begin.
I've always been fascinated by weather. In grade school, I borrowed a collegiate-level meteorology textbook from the public library and read it over the summer while tracking hurricanes on a paper map. In high school, I'd "borrow" my Dad's old VHS camcorder and go storm chasing in my Grand Am, hoping for tornadoes but only collecting hail dings for my trouble. In college, I took a proper course in meteorology to understand what had changed since my foray into the science, and the myriad systems that feed back on one another to create the most powerful natural forces on the planet.
I'm also an Eagle Scout. With OCD. Which is to say, Emergency Preparedness is top of mind for me to an unreasonable degree. As in, I know where to go in the event of a nuclear detonation - and which areas are lost causes since they're prime targets.
When I lived in the Midwest, part of Emergency Preparedness was the obligatory weather radio, complete with its screeching alarm every time the clouds sneezed overhead. Better safe than sorry, after all. Since moving to New England, the need for such a device has lessened: severe weather events are rarer here, and, in theory, the smartphones in our pockets can push a notification whenever a threat emerges.
In practice, my experience with iOS weather alerts has been...dismal, to say the least. Turning off Amber Alerts (which were overused to the point of falling into the background noise) has also lessened the frequency of severe weather alerts being received, as has the practical reality that my apartment is effectively a deathtrap for cellular signals (I ran a 3G femtocell for a while, in the pre-WiFi-calling days).
Coincidentally, I have a local Home Assistant I've been dying to get more use out of. While I've already coded my own little widgets for displaying current alerts using NWS API calls, they lack the pizazz of a proper add-in, like colors, alerts, push notifications, etc.
I also have a Claude Pro subscription, and a desire to try agents out.
You can see where this is going.
Project #2: Weather Alerts for Home Assistant
First, we bootstrap Claude Code onto my Macbook Pro.
curl -fsSL https://claude.ai/install.sh | bash
...and export the $PATH it's looking for:

echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc
Okie dokie. Hang on to your butts...
claude
And we get a nifty little setup process that I didn't bother screenshotting for you, sorry. Anyway, I finish setup in the terminal, create a new GitHub repo for the project, cd into the folder, and relaunch Claude Code to give it access to the repo. I run /init on the empty folder, Claude spends 15.3k tokens analyzing empty space to build a CLAUDE.md file, and we're ready to start building.
I've never done this before, so I'm going to give it the big, meaty project description and key requirements, all in one prompt:

Let's start building this Home Assistant module. It needs to get all weather alerts from the National Weather Service API, using either the location information from the Home Assistant itself (GEO coordinates) or the zip code provided by the user. This data should be refreshed every five minutes if there's no alerts, and every thirty seconds if there's an active storm warning. External dependencies (e.g., python packages) are forbidden; code must be self-contained within the module to avoid licensing issues. The output should be a summary of the alert (e.g., "Severe Thunderstorm Watch") along with the full text if clicked on. For Severe Thunderstorm, Tornado alerts, push notifications should be sent to the Home Assistant app on mobile devices. All code should be documented for ease of maintenance by humans or bots in the future.
And off we go. Time to make myself lunch, if this is to be belie-

Or not, that's fine. Turns out it wants permission to do a web search. That's...weird, but honestly as someone who-

-sorry, just another fetch request approval. As I was-

Sorry, it needed approval to browse the National Weather Service. Ahem, as I was saying, I appreciate the explicit approval requests rather than assuming intent. That was a grievance of my prior post with the web UI, so seeing Claude Code politely go, "Hey, I wanna peep this site, can I do that?" is kinda nice.
Now, about that sandwich...
I made myself a PB&J because I am unemployed, and the bread was B1G1 today.
Meanwhile, Claude actually generated some code. Let's take a look, shall we?
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Home Assistant custom integration for NWS (National Weather Service) weather alerts. Fetches active alerts for a configured US location and exposes them as sensor entities with push notifications for severe weather.
## Architecture
```
custom_components/weather_alerts/
├── __init__.py # Integration setup, entry load/unload
├── manifest.json # Integration metadata (zero external dependencies)
├── const.py # Domain, config keys, intervals, alert types, API URLs
├── coordinator.py # DataUpdateCoordinator with variable polling (5min/30sec)
├── sensor.py # Sensor entities (summary + count)
├── config_flow.py # UI config: zip code OR lat/lon coordinates
├── strings.json # UI strings for config flow
└── translations/
└── en.json # English translations
```
## Key Design Decisions
- **Zero external dependencies**: Uses HA's built-in aiohttp session via `async_get_clientsession()`
- **Variable polling**: 5 min default, 30 sec when tornado/severe thunderstorm warnings are active
- **Location resolution**: Supports US zip codes (via Census Bureau geocoding API) or direct lat/lon
- **Push notifications**: Automatic for tornado/severe thunderstorm warnings via `notify.notify` service; deduplicates by alert ID
- **NWS API**: Requires User-Agent header with contact info; coordinates rounded to 4 decimal places
## Installation
Copy `custom_components/weather_alerts/` to your Home Assistant `custom_components/` directory, restart HA, then add via Settings > Integrations > Add Integration > "Weather Alerts".
## Development Notes
- All API calls use HA's shared aiohttp session (no direct `aiohttp` imports needed)
- Config flow validates location by making a test API call before saving
- Coordinator caches resolved coordinates to avoid repeated geocoding
- `strings.json` and `translations/en.json` must stay in sync
...wow. Look, even if this turns out to just be confident bullshitting, this is some confident as fuck bullshitting. Welp, time to brush up on my Python and see what it generated in earnest. Let's start with the __init__.py file.
"""Weather Alerts integration for Home Assistant."""
from __future__ import annotations
import logging
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import Platform
from homeassistant.core import HomeAssistant
from .const import DOMAIN
from .coordinator import WeatherAlertsCoordinator
_LOGGER = logging.getLogger(__name__)
PLATFORMS = [Platform.SENSOR]
async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
"""Set up Weather Alerts from a config entry."""
coordinator = WeatherAlertsCoordinator(hass, dict(entry.data))
await coordinator.async_config_entry_first_refresh()
hass.data.setdefault(DOMAIN, {})
hass.data[DOMAIN][entry.entry_id] = coordinator
await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
return True
async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
"""Unload a config entry."""
unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
if unload_ok:
hass.data[DOMAIN].pop(entry.entry_id)
return unload_ok
Grump. I explicitly told it "no dependencies", and yet it imported dependencies. Fine, fine, it did not behave as I commanded it to, but neither would a junior. Except the junior is a person and Claude is a machine, but never mind, I'll just eat it and move forward.
Bearing in mind that I'm no Python expert, and that this code lacks comments, I think I get the gist, here. Let's start with the imports:
from __future__ import annotations
import logging
from homeassistant.config_entries import ConfigEntry
from homeassistant.const import Platform
from homeassistant.core import HomeAssistant
from .const import DOMAIN
from .coordinator import WeatherAlertsCoordinator

First, it imports the annotations component of the future package. I know that vocabulary is wrong, but my brain thinks of modules as equivalent to packages, and functions within that package as a component; in Python terms, this is importing a future statement from the __future__ module.
Then we've got some more standard stuff: logging, homeassistant.config_entries, homeassistant.const, and homeassistant.core. All pretty straightforward packages in the Home Assistant deployment. Then we get to .coordinator, which thus far hasn't been called anywhere else. That's because it's actually a custom package Claude wrote, which we'll get into momentarily.
Okay, so the imports are at least sensical, minimal, and practical. Claude is forgiven, though I will still side-eye it for ignoring my command instead of challenging it respectfully as a real human would.
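For the curious, here's roughly what that __future__ import buys you; a minimal sketch of my own, not from the generated project:

```python
# With this import, every annotation is stored as a plain string and never
# evaluated at runtime, which lets a class reference itself in its own hints
# (and lets older Pythons accept newer hint syntax like "Node | None").
from __future__ import annotations


class Node:
    # "Node" works here even though the class isn't fully defined yet,
    # because the hint is just the string "Node | None".
    def __init__(self, value: int, parent: Node | None = None) -> None:
        self.value = value
        self.parent = parent


root = Node(1)
child = Node(2, parent=root)
```

Without the import, that self-referencing hint would blow up at class-definition time on older interpreters.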
Let's continue:
_LOGGER = logging.getLogger(__name__)
PLATFORMS = [Platform.SENSOR]
async def async_setup_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
"""Set up Weather Alerts from a config entry."""
coordinator = WeatherAlertsCoordinator(hass, dict(entry.data))
await coordinator.async_config_entry_first_refresh()
hass.data.setdefault(DOMAIN, {})
hass.data[DOMAIN][entry.entry_id] = coordinator
await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
return True
async def async_unload_entry(hass: HomeAssistant, entry: ConfigEntry) -> bool:
"""Unload a config entry."""
unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
if unload_ok:
hass.data[DOMAIN].pop(entry.entry_id)
return unload_ok

Straightforward enough as well. First it defines the Logger and Platforms variables, then it sets up asynchronous definitions for some Home Assistant ConfigEntries to check them for validity and return boolean results.
I think. I'm a PowerShell dinosaur, not a Python one, hence why I'm reading this code instead of blindly executing it in an effort to get better at Python. Let's see how close I am by asking Claude to explain the file:
The file has three sections: module setup, entry setup, and entry unload.
---
Module setup (lines 1–16)
_LOGGER = logging.getLogger(__name__)
PLATFORMS = [Platform.SENSOR]
Creates a logger scoped to this module and declares which entity platforms this integration provides. When HA loads the integration, it uses PLATFORMS to know it needs to forward setup to sensor.py. Only sensor is listed — if you added binary sensors or switches later, you'd add them here.
---
async_setup_entry (lines 19–28)
Called by HA when a user adds the integration through the UI (after config flow completes) or on startup for existing entries.
coordinator = WeatherAlertsCoordinator(hass, dict(entry.data))
await coordinator.async_config_entry_first_refresh()
Creates the coordinator with the user's config (zip code or lat/lon) and immediately does the first API fetch. async_config_entry_first_refresh is a HA method that raises ConfigEntryNotReady if the first fetch fails — this tells HA to retry later instead of silently loading with no data.
hass.data.setdefault(DOMAIN, {})
hass.data[DOMAIN][entry.entry_id] = coordinator
Stores the coordinator in hass.data, a shared dict that all HA components can access. setdefault ensures the top-level dict exists (in case this is the first instance). Keying by entry.entry_id allows multiple instances of the integration (e.g., monitoring two different locations).
await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
Tells HA to call async_setup_entry in sensor.py, which creates the actual sensor entities. This is the standard HA pattern — __init__.py sets up shared resources (coordinator), then delegates entity creation to platform modules.
return True
Signals to HA that setup succeeded.
---
async_unload_entry (lines 31–36)
Called when the user removes the integration or during a config entry reload.
unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS)
Tells HA to tear down all entities from the sensor platform. Returns False if any platform fails to unload.
if unload_ok:
hass.data[DOMAIN].pop(entry.entry_id)
Only cleans up the coordinator from hass.data if the platform unload succeeded. If it failed, the coordinator stays so HA can retry. This prevents orphaned entities from losing their data source.
return unload_ok
Passes the success/failure status back to HA.

I was pretty close! My personal lack of understanding of HA's architecture was the missing piece, and why I whiffed on the asynchronous entries. Of course, this also rests on the big assumption that Claude is right, which it may not be. My knowledge gap in Python specifics leaves me in a dangerous zone of ambiguity, so I need to make absolutely sure I know what the code is doing in those other packages.
Thus far, this makes sense: the __init__.py file is basically bootstrapping the add-on for Home Assistant at startup, and setting up routines to make sure the modules successfully load with initial data before moving on to the meat of the add-on. It does what it says on the tin: initializes.
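If you strip away the HA machinery, the hass.data bookkeeping is just nested-dict hygiene. A plain-Python sketch of the same pattern (the function names here are mine, not from the generated code):

```python
# Simulating how the integration parks per-entry state in a shared dict,
# keyed first by integration domain, then by config entry ID.
DOMAIN = "weather_alerts"
shared: dict = {}  # stand-in for hass.data


def setup_entry(entry_id: str, coordinator: dict) -> None:
    # setdefault creates the domain bucket only on first use, so the
    # first and tenth instances of the integration take the same path
    shared.setdefault(DOMAIN, {})
    shared[DOMAIN][entry_id] = coordinator


def unload_entry(entry_id: str) -> None:
    # drop only this entry's state; other instances are untouched
    shared[DOMAIN].pop(entry_id)


setup_entry("entry_1", {"location": "zip A"})
setup_entry("entry_2", {"location": "zip B"})  # two instances coexist
unload_entry("entry_1")
```

Keying by entry ID is what lets you monitor two locations at once without the instances trampling each other.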
Let's move on to the manifest.json output:
{
"domain": "weather_alerts",
"name": "Weather Alerts",
"codeowners": ["@{LOCAL_MAC_USER}"],
"config_flow": true,
"documentation": "https://github.com/{LOCAL_MAC_USER}/ha-weather-alerts",
"iot_class": "cloud_polling",
"requirements": [],
"version": "1.0.0"
}
(Almost) the raw JSON. {LOCAL_MAC_USER} standing in for my actual macOS username.
This, I can read just fine. It's key-value pairs, and nothing here particularly jumps out at me as erroneous, with one tiny exception: the documentation value substitutes my local macOS username for my actual GitHub username in the repo URL this will commit to. Something I'll need to change, but not a dealbreaker. If my inference of Home Assistant architecture is correct, these fields basically just tell Home Assistant what the key identifiers are; kinda like the various integration pages in their add-on database. For what it's worth, the other JSON files - the English translations and strings.json - are similarly boilerplate: define key, define value, voila.
Let's get back to the Python output, because that's where the action is happening. Let's start with const.py:
"""Constants for the Weather Alerts integration."""
from datetime import timedelta
DOMAIN = "weather_alerts"
# Config keys
CONF_ZIP_CODE = "zip_code"
CONF_USE_COORDINATES = "use_coordinates"
CONF_LATITUDE = "latitude"
CONF_LONGITUDE = "longitude"
# Polling intervals
SCAN_INTERVAL_DEFAULT = timedelta(seconds=300)
SCAN_INTERVAL_SEVERE = timedelta(seconds=30)
# Alert types that trigger severe polling and push notifications
SEVERE_ALERT_TYPES = [
"Tornado Warning",
"Severe Thunderstorm Warning",
]
# NWS API
NWS_API_BASE = "https://api.weather.gov"
NWS_USER_AGENT = "(ha-weather-alerts, github.com/{LOCAL_MAC_USER}/ha-weather-alerts)"
# US Census Bureau Geocoding API (free, no key required)
CENSUS_GEOCODE_URL = (
"https://geocoding.geo.census.gov/geocoder/locations/onelineaddress"
)
This is also highly readable, but to be fair, it's all constants - or rather, module-level variables that Python treats as constants purely by uppercase naming convention, since the language has no true constant keyword. The GitHub URL issue rears its head again, and I'm sure this will be a recurring theme.
Switching gears for a moment: can I say how much I like this approach? Defining variables in a separate file to keep better track of them, and just importing them into the necessary scripts as needed, is honestly brilliant. I'm keeping this trick for my own future scripting needs.
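The trick generalizes to any script collection: one module holds nothing but named values, and consumers import exactly what they need. A minimal single-file sketch of the pattern (contrived names, mine):

```python
# In the real project this top half lives in const.py; everything below
# would sit in a consumer module that does "from .const import ...".
from datetime import timedelta

SCAN_INTERVAL_DEFAULT = timedelta(seconds=300)  # calm skies: poll slowly
SCAN_INTERVAL_SEVERE = timedelta(seconds=30)    # active warning: poll fast


def pick_interval(has_severe: bool) -> timedelta:
    # Changing an interval later means editing one constant in one file,
    # not grepping every script that hardcoded "300".
    return SCAN_INTERVAL_SEVERE if has_severe else SCAN_INTERVAL_DEFAULT
```
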
Okay then, that all looks good. Let's check config_flow.py next:
"""Config flow for Weather Alerts integration."""
from __future__ import annotations
import logging
from typing import Any
import voluptuous as vol
from homeassistant.config_entries import ConfigFlow, ConfigFlowResult
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from .const import (
CONF_LATITUDE,
CONF_LONGITUDE,
CONF_ZIP_CODE,
DOMAIN,
NWS_API_BASE,
NWS_USER_AGENT,
)
_LOGGER = logging.getLogger(__name__)
class WeatherAlertsConfigFlow(ConfigFlow, domain=DOMAIN):
"""Handle a config flow for Weather Alerts."""
VERSION = 1
async def async_step_user(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
"""Handle the initial step."""
errors: dict[str, str] = {}
if user_input is not None:
zip_code = user_input.get(CONF_ZIP_CODE, "").strip()
latitude = user_input.get(CONF_LATITUDE)
longitude = user_input.get(CONF_LONGITUDE)
if zip_code:
# Validate zip code by attempting geocode
valid, lat, lon = await self._async_validate_zip(zip_code)
if valid:
return self.async_create_entry(
title=f"Weather Alerts ({zip_code})",
data={CONF_ZIP_CODE: zip_code},
)
errors["base"] = "invalid_location"
elif latitude is not None and longitude is not None:
# Validate coordinates by making a test NWS call
valid = await self._async_validate_coordinates(latitude, longitude)
if valid:
return self.async_create_entry(
title=f"Weather Alerts ({latitude}, {longitude})",
data={
CONF_LATITUDE: latitude,
CONF_LONGITUDE: longitude,
},
)
errors["base"] = "invalid_location"
else:
errors["base"] = "no_location"
# Default to HA's configured coordinates
default_lat = self.hass.config.latitude
default_lon = self.hass.config.longitude
return self.async_show_form(
step_id="user",
data_schema=vol.Schema(
{
vol.Optional(CONF_ZIP_CODE, default=""): str,
vol.Optional(
CONF_LATITUDE, default=default_lat
): vol.Coerce(float),
vol.Optional(
CONF_LONGITUDE, default=default_lon
): vol.Coerce(float),
}
),
errors=errors,
)
async def _async_validate_zip(
self, zip_code: str
) -> tuple[bool, float | None, float | None]:
"""Validate a zip code by geocoding it."""
from .const import CENSUS_GEOCODE_URL
session = async_get_clientsession(self.hass)
params = {
"address": zip_code,
"benchmark": "Public_AR_Current",
"format": "json",
}
try:
resp = await session.get(CENSUS_GEOCODE_URL, params=params, timeout=10)
resp.raise_for_status()
data = await resp.json(content_type=None)
matches = data["result"]["addressMatches"]
if not matches:
return False, None, None
coords = matches[0]["coordinates"]
return True, coords["y"], coords["x"]
except Exception:
_LOGGER.exception("Error validating zip code %s", zip_code)
return False, None, None
async def _async_validate_coordinates(
self, lat: float, lon: float
) -> bool:
"""Validate coordinates by making a test NWS API call."""
session = async_get_clientsession(self.hass)
lat_r = round(lat, 4)
lon_r = round(lon, 4)
url = f"{NWS_API_BASE}/points/{lat_r},{lon_r}"
headers = {
"User-Agent": NWS_USER_AGENT,
"Accept": "application/geo+json",
}
try:
resp = await session.get(url, headers=headers, timeout=10)
return resp.status == 200
except Exception:
_LOGGER.exception("Error validating coordinates %s,%s", lat, lon)
return False
A lot going on here, and all relatively self-explanatory, even to my newbie eyes. That said, there are a few callouts worth noting:
- Voluptuous is no longer actively maintained, only accepting contributions. This is exactly why I asked Claude to write dependencies and libraries itself, not use others.
- I'm not entirely sure why typing was imported; Python's variable handling is pretty straightforward on its own. Claude, for what it's worth, claims it's to help Home Assistant, and this knowledge gap again leaves me in a precarious position.
It's used for the Any type hint in the async_step_user method signature:
async def async_step_user(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
Any comes from typing and is used here because the config flow form values can be strings, floats, or other types depending on the field.

Claude's justification for importing the typing module. I lack the expertise to validate its reason.
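Claude's claim is at least consistent with how Any gets used in the wild; a quick illustration with a toy function of my own, not project code:

```python
from __future__ import annotations

from typing import Any

# dict[str, Any] says "string keys, values of any type" - exactly the
# shape of a submitted config-flow form, where one field is a string zip
# code and others are float coordinates.
def describe(user_input: dict[str, Any] | None = None) -> str:
    if user_input is None:
        return "form not yet submitted"
    return ", ".join(f"{k}={v!r}" for k, v in sorted(user_input.items()))


summary = describe({"zip_code": "66044", "latitude": 38.9717})
```
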
Two more files to go. Let's look at the sensor.py file next:
"""Sensor platform for Weather Alerts."""
from __future__ import annotations
from typing import Any
from homeassistant.components.sensor import SensorEntity
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
from homeassistant.helpers.entity_platform import AddEntitiesCallback
from homeassistant.helpers.update_coordinator import CoordinatorEntity
from .const import DOMAIN, SEVERE_ALERT_TYPES
from .coordinator import WeatherAlertsCoordinator
async def async_setup_entry(
hass: HomeAssistant,
entry: ConfigEntry,
async_add_entities: AddEntitiesCallback,
) -> None:
"""Set up Weather Alerts sensors from a config entry."""
coordinator: WeatherAlertsCoordinator = hass.data[DOMAIN][entry.entry_id]
async_add_entities(
[
WeatherAlertsSensor(coordinator, entry),
WeatherAlertsCountSensor(coordinator, entry),
]
)
class WeatherAlertsSensor(CoordinatorEntity[WeatherAlertsCoordinator], SensorEntity):
"""Main weather alerts sensor showing alert summary."""
_attr_has_entity_name = True
_attr_translation_key = "weather_alerts"
def __init__(
self, coordinator: WeatherAlertsCoordinator, entry: ConfigEntry
) -> None:
"""Initialize the sensor."""
super().__init__(coordinator)
self._attr_unique_id = f"{entry.entry_id}_alerts"
@property
def native_value(self) -> str:
"""Return the state."""
alerts = self._alerts
if not alerts:
return "No alerts"
count = len(alerts)
return f"{count} active alert{'s' if count != 1 else ''}"
@property
def extra_state_attributes(self) -> dict[str, Any]:
"""Return alert details as attributes."""
alerts = self._alerts
return {
"alerts": [
{
"event": a["event"],
"headline": a["headline"],
"description": a["description"],
"severity": a["severity"],
"urgency": a["urgency"],
"certainty": a["certainty"],
"effective": a["effective"],
"expires": a["expires"],
"sender_name": a["sender_name"],
"area_desc": a["area_desc"],
}
for a in alerts
],
"severe_active": any(
a["event"] in SEVERE_ALERT_TYPES for a in alerts
),
}
@property
def icon(self) -> str:
"""Return icon based on alert state."""
alerts = self._alerts
if any(a["event"] in SEVERE_ALERT_TYPES for a in alerts):
return "mdi:weather-lightning"
if alerts:
return "mdi:alert"
return "mdi:shield-check"
@property
def _alerts(self) -> list[dict[str, Any]]:
"""Get alerts from coordinator data."""
if self.coordinator.data is None:
return []
return self.coordinator.data.get("alerts", [])
class WeatherAlertsCountSensor(
CoordinatorEntity[WeatherAlertsCoordinator], SensorEntity
):
"""Numeric sensor with alert count."""
_attr_has_entity_name = True
_attr_translation_key = "weather_alerts_count"
_attr_native_unit_of_measurement = "alerts"
_attr_state_class = "measurement"
def __init__(
self, coordinator: WeatherAlertsCoordinator, entry: ConfigEntry
) -> None:
"""Initialize the sensor."""
super().__init__(coordinator)
self._attr_unique_id = f"{entry.entry_id}_count"
@property
def native_value(self) -> int:
"""Return the alert count."""
if self.coordinator.data is None:
return 0
return len(self.coordinator.data.get("alerts", []))
@property
def icon(self) -> str:
"""Return icon based on count."""
if self.native_value > 0:
return "mdi:alert-circle"
return "mdi:check-circle"
Again, I think I got this. Usual library imports from Home Assistant and Python, usual call to Home Assistant to start a sensor and keep retrying till it has data, and then the meat of the program in the form of two class definitions.
WeatherAlertsSensor looks to be doing the heavy lifting of actually examining the data and changing icons depending on the responses found; this is good, and generally what I asked for. WeatherAlertsCountSensor, on a closer read, just exposes a numeric count of active alerts as its own entity; the "only notify me once" behavior I asked for lives elsewhere, in the coordinator's dedup-by-alert-ID logic.
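The state string itself is simple enough to pull out and poke at. Here's the native_value and icon logic boiled down to plain functions (my paraphrase of the generated properties):

```python
# Reproducing the sensor's state calculation outside of HA: an empty
# alert list reads "No alerts", otherwise a pluralized count.
def alert_state(alerts: list[dict]) -> str:
    if not alerts:
        return "No alerts"
    count = len(alerts)
    return f"{count} active alert{'s' if count != 1 else ''}"


# Icon choice follows the same shape, keyed off the severe event names.
SEVERE = {"Tornado Warning", "Severe Thunderstorm Warning"}


def alert_icon(alerts: list[dict]) -> str:
    if any(a["event"] in SEVERE for a in alerts):
        return "mdi:weather-lightning"
    return "mdi:alert" if alerts else "mdi:shield-check"
```
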
That just leaves coordinator.py:
"""DataUpdateCoordinator for Weather Alerts."""
from __future__ import annotations
import logging
from typing import Any
from homeassistant.core import HomeAssistant
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
from .const import (
CENSUS_GEOCODE_URL,
CONF_LATITUDE,
CONF_LONGITUDE,
CONF_ZIP_CODE,
DOMAIN,
NWS_API_BASE,
NWS_USER_AGENT,
SCAN_INTERVAL_DEFAULT,
SCAN_INTERVAL_SEVERE,
SEVERE_ALERT_TYPES,
)
_LOGGER = logging.getLogger(__name__)
class WeatherAlertsCoordinator(DataUpdateCoordinator[dict[str, Any]]):
"""Coordinator that fetches weather alerts from the NWS API."""
def __init__(self, hass: HomeAssistant, config_data: dict[str, Any]) -> None:
"""Initialize the coordinator."""
super().__init__(
hass,
_LOGGER,
name=DOMAIN,
update_interval=SCAN_INTERVAL_DEFAULT,
)
self._config_data = config_data
self._resolved_lat: float | None = None
self._resolved_lon: float | None = None
self._notified_alert_ids: set[str] = set()
async def _async_update_data(self) -> dict[str, Any]:
"""Fetch alerts from NWS API."""
lat, lon = await self._async_resolve_location()
alerts = await self._async_fetch_alerts(lat, lon)
self._adjust_polling_interval(alerts)
await self._async_send_notifications(alerts)
return {"alerts": alerts, "latitude": lat, "longitude": lon}
async def _async_resolve_location(self) -> tuple[float, float]:
"""Resolve location to lat/lon coordinates."""
if self._resolved_lat is not None and self._resolved_lon is not None:
return self._resolved_lat, self._resolved_lon
if CONF_ZIP_CODE in self._config_data and self._config_data[CONF_ZIP_CODE]:
lat, lon = await self._async_geocode_zip(self._config_data[CONF_ZIP_CODE])
else:
lat = self._config_data[CONF_LATITUDE]
lon = self._config_data[CONF_LONGITUDE]
# NWS requires coordinates rounded to 4 decimal places
self._resolved_lat = round(float(lat), 4)
self._resolved_lon = round(float(lon), 4)
return self._resolved_lat, self._resolved_lon
async def _async_geocode_zip(self, zip_code: str) -> tuple[float, float]:
"""Geocode a zip code using the US Census Bureau API."""
session = async_get_clientsession(self.hass)
params = {
"address": zip_code,
"benchmark": "Public_AR_Current",
"format": "json",
}
try:
resp = await session.get(CENSUS_GEOCODE_URL, params=params, timeout=10)
resp.raise_for_status()
data = await resp.json(content_type=None)
matches = data["result"]["addressMatches"]
if not matches:
raise UpdateFailed(
f"Could not geocode zip code {zip_code}: no matches found"
)
coords = matches[0]["coordinates"]
return coords["y"], coords["x"]
except (KeyError, IndexError) as err:
raise UpdateFailed(
f"Unexpected response from geocoding API for {zip_code}"
) from err
except Exception as err:
if isinstance(err, UpdateFailed):
raise
raise UpdateFailed(
f"Error geocoding zip code {zip_code}: {err}"
) from err
async def _async_fetch_alerts(
self, lat: float, lon: float
) -> list[dict[str, Any]]:
"""Fetch active alerts from the NWS API for the given coordinates."""
session = async_get_clientsession(self.hass)
url = f"{NWS_API_BASE}/alerts/active?point={lat},{lon}"
headers = {
"User-Agent": NWS_USER_AGENT,
"Accept": "application/geo+json",
}
try:
resp = await session.get(url, headers=headers, timeout=10)
resp.raise_for_status()
data = await resp.json()
except Exception as err:
raise UpdateFailed(f"Error fetching alerts from NWS: {err}") from err
alerts = []
for feature in data.get("features", []):
props = feature.get("properties", {})
alerts.append(
{
"id": props.get("id", ""),
"event": props.get("event", "Unknown"),
"headline": props.get("headline", ""),
"description": props.get("description", ""),
"severity": props.get("severity", "Unknown"),
"certainty": props.get("certainty", "Unknown"),
"urgency": props.get("urgency", "Unknown"),
"effective": props.get("effective", ""),
"expires": props.get("expires", ""),
"sender_name": props.get("senderName", ""),
"area_desc": props.get("areaDesc", ""),
}
)
return alerts
def _adjust_polling_interval(self, alerts: list[dict[str, Any]]) -> None:
"""Adjust polling interval based on alert severity."""
has_severe = any(
alert["event"] in SEVERE_ALERT_TYPES for alert in alerts
)
new_interval = SCAN_INTERVAL_SEVERE if has_severe else SCAN_INTERVAL_DEFAULT
if self.update_interval != new_interval:
_LOGGER.info(
"Adjusting polling interval to %s (severe alerts: %s)",
new_interval,
has_severe,
)
self.update_interval = new_interval
async def _async_send_notifications(
self, alerts: list[dict[str, Any]]
) -> None:
"""Send push notifications for severe alerts."""
# Clean up old alert IDs that are no longer active
active_ids = {alert["id"] for alert in alerts}
self._notified_alert_ids &= active_ids
for alert in alerts:
if alert["event"] not in SEVERE_ALERT_TYPES:
continue
if alert["id"] in self._notified_alert_ids:
continue
self._notified_alert_ids.add(alert["id"])
_LOGGER.info("Sending notification for alert: %s", alert["headline"])
try:
await self.hass.services.async_call(
"notify",
"notify",
{
"title": alert["event"],
"message": alert["headline"]
or f"{alert['event']} for {alert['area_desc']}",
"data": {
"push": {
"sound": {
"name": "default",
"critical": 1,
"volume": 1.0,
}
},
"ttl": 0,
"priority": "high",
},
},
)
except Exception:
_LOGGER.exception("Failed to send notification for %s", alert["event"])
I recognize this logic, at least on the surface: it's what I already built sans the Census Geocode lookup for zip code translation into GEO coordinates (which is honestly brilliant; I never thought of it and just dug up GEO coordinates manually).
Here, I'll share proof in the form of my current HA weather alert system I setup long before Claude Code:
sensor:
- platform: rest
resource: https://api.weather.gov/alerts/active/zone/{ZONE_ID}
name: {NAME, STATE} Weather Alerts - {ZONE_ID}
value_template: >
{% if value_json.features[0] is defined %}
{{ value_json['features'][0]['properties'].event }}
{% else %}
No Weather Alerts
{% endif %}
json_attributes:
- features
headers:
User-Agent: {PERSONAL_EMAIL}
Accept: application/geo+json
scan_interval: 60

Feel free to copy and paste it if you like; it's just a basic sensor config that looks like this on my HA GUI:

So from my point of view, this looks like a lot of Python to automate something I wrote in a few lines of YAML. To be fair, there's also more logic involved with identifying the zone for NWS API calls, alerting, and (allegedly) sending Push Notifications, which my code does not have.
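That alerting logic, incidentally, is the part my YAML genuinely can't do. The coordinator's notification dedup is a neat little piece of set arithmetic worth isolating; my own distillation of it:

```python
# The coordinator remembers which alert IDs it has already notified on.
# Intersecting with the currently-active IDs (&=) each poll drops stale
# entries, so an alert that expires and later re-issues under a new ID
# will notify again instead of being silently swallowed.
SEVERE = {"Tornado Warning", "Severe Thunderstorm Warning"}

notified = {"urn:alert:old", "urn:alert:tornado-1"}  # from prior polls
active_alerts = [
    {"id": "urn:alert:tornado-1", "event": "Tornado Warning"},
    {"id": "urn:alert:tornado-2", "event": "Tornado Warning"},
    {"id": "urn:alert:flood-1", "event": "Flood Watch"},
]

active_ids = {a["id"] for a in active_alerts}
notified &= active_ids  # "urn:alert:old" expired, forget it

to_notify = []
for alert in active_alerts:
    # skip non-severe events, and severe ones we've already pinged about
    if alert["event"] not in SEVERE or alert["id"] in notified:
        continue
    notified.add(alert["id"])
    to_notify.append(alert["id"])
```

Only the new tornado warning gets a push; the flood watch and the already-notified warning stay quiet.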
All that's left is to make the necessary changes, commit the repo, and then install on my Home Assistant - by hand, because like hell I'm letting Claude touch my stuff directly.
Does it work?
I mean, kinda!

Alright then, let's ask Claude to debug this issue. After doing its thing, it spits out the following artifacts and root cause analysis:
Working correctly — valid zips return coordinates, invalid zips return 404.
Root cause: The Census Bureau geocoding API (/geocoder/locations/onelineaddress) requires a full street address, not just a zip code. A bare zip code like "90210" returns zero addressMatches.
Fix: Replaced the Census Bureau API with https://api.zippopotam.us, which is purpose-built for zip-to-coordinate lookups. It's free, requires no API key, and returns a clean 404 for invalid zips.
Files changed:
- const.py — replaced CENSUS_GEOCODE_URL with ZIPPOPOTAM_URL
- coordinator.py — rewrote _async_geocode_zip to parse Zippopotam's places[0].latitude/longitude response
- config_flow.py — rewrote _async_validate_zip to use the same Zippopotam API

...and update the code on Home Assistant accordingly, test...

I can see why the rock-bangers love this thing. It's push-button-get-software. A years-long side project is ticked off my list and out of my head.
...I want to do more.
Let's add a test function!
Everything looks correct. Here's what was added:
Files changed:
- services.yaml (new) — Defines the weather_alerts.test_alert service so it appears in Developer Tools > Services
- coordinator.py — Added _test_alert field and two methods:
- async_inject_test_alert() — Creates a fake Tornado Warning with a unique TEST- prefixed ID, "THIS IS ONLY A TEST" in the headline/description,
sender name "Weather Alerts Integration (TEST)", and a 60-second expiry. Triggers an immediate coordinator refresh.
- async_clear_test_alert() — Manually clears the test alert before expiry if needed
- _async_update_data() — Now checks for an active test alert and prepends it to real alerts if not expired, or auto-clears it
- __init__.py — Registers the weather_alerts.test_alert service on entry setup, removes it when the last entry unloads
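Stripped of the Home Assistant plumbing, the injection pattern in that summary is simple enough to sketch. This is a toy version under my own names and assumptions, not the integration's actual coordinator code:

```python
import time
import uuid


class AlertCoordinator:
    """Toy version of the coordinator's test-alert handling."""

    TEST_TTL = 60  # seconds, matching the 60-second expiry above

    def __init__(self):
        self._test_alert = None

    def inject_test_alert(self) -> dict:
        # Fake Tornado Warning with a unique TEST- prefixed ID.
        self._test_alert = {
            "id": f"TEST-{uuid.uuid4()}",
            "event": "Tornado Warning",
            "headline": "THIS IS ONLY A TEST",
            "sender": "Weather Alerts Integration (TEST)",
            "expires": time.time() + self.TEST_TTL,
        }
        return self._test_alert

    def clear_test_alert(self):
        # Manually clear the test alert before expiry if needed.
        self._test_alert = None

    def update_data(self, real_alerts: list[dict]) -> list[dict]:
        # Prepend the test alert if it hasn't expired; auto-clear otherwise.
        if self._test_alert and time.time() < self._test_alert["expires"]:
            return [self._test_alert, *real_alerts]
        self._test_alert = None
        return real_alerts
```

The real integration does this asynchronously across all coordinator instances, but the shape is the same: fake alert in, prepend until expiry, auto-clear on the next poll.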
How it works:
1. Call weather_alerts.test_alert from Developer Tools > Services
2. A fake Tornado Warning is injected into all coordinator instances
3. The coordinator immediately refreshes, which:
- Updates the sensor to show "1 active alert" (or +1 if real alerts exist)
- Switches polling to 30-second severe interval
- Fires a push notification with "TEST ALERT - Tornado Warning - THIS IS ONLY A TEST"
4. After 60 seconds, the test alert auto-expires on the next poll cycle and everything returns to normal

And now, let's test the test!

It worked.
Pulled lever. Got software.
Fuck.
Final Thoughts and Observations
Man, I get why the proponents of this thing say software developers are cooked. I pulled a lever on a slot machine and got exactly what I'd had in my head for over a year, functional (largely) on the first shot with minimal iteration needed.
Here's the wrinkle, though: I'm done. I have the thing, and it's written well enough that I can debug it myself at a later date. I do not need a Claude Code subscription now that the task is complete. I suspect the AI companies know this, but their investors definitely do not, given the investment pouring in. There's this expectation that AI will be this recurring revenue stream forever, that these data center buildouts will generate huge returns.
I call sus.

For software consumers, we generally just need one-off solutions to our niche problems. This is one such solution to one such problem, and it's built in such a way that the easy part - maintenance - can be achieved without fully understanding how to build it from scratch yourself. If the NWS API goes down, I can simply point to a new one and parse the payload into the existing structure; if Zippopotam.us goes down, I can move to a different API elsewhere - or just drop that automation entirely in favor of strict lat/long coordinates.
This has been the "secret sauce" of IT for decades: we don't build the software, we just maintain it ad infinitum. Consumers won't be any different, which is why these companies are trying to position the tools as "answer engines" or "research aides" to promote continuous use and subscription revenue. As someone with basic research competency skills who enjoys learning, I will never need a tool in that capacity, and if I ever do, I can almost certainly run it locally instead, free of charge. I realize I'm the outlier here, but I do wonder what the actual market for subscription AI is in the long run - and I have a feeling it's "less than you think, especially when models can execute locally".
For business and enterprise use cases, that market is far larger - for now. The same rule applies, though: once the business has the thing, and can maintain the thing forever, there's no more need to pay for the thing. That might be CRM, it might be ERP, it might be HRIS. SaaS had the moat of standard platforms, workflows, codebases, and transferable skills that made recurring revenue a cash cow; post-AI, that model is under dire threat. AI's own moat is AI itself, something that's becoming increasingly optimized for local or edge inference and that won't need monthly subs or API budgets by 2030, barring a profound revolution in the field I can't possibly foresee.
Thus the pivot to agents, little "subscriptions" masquerading as autonomous widgets that can repeat tasks and do things forever. Except again, so can software, with the benefit of only needing to be written once and maintained forever. While agent swarms have legitimate uses for specific projects, anything that's remotely repeatable should be hard-coded into software that can be modernized, updated, and maintained long into the future; there's no reason to use an agent where a script can accomplish the same goals.
And that brings me back to Claude Code, this honestly kind of magical little Terminal buddy who kicked off a side project that'd been in my backlog for a year, aced it in one go, and disappeared into the aether afterward. I'm done. I have the thing. As much as Anthropic wants my money, I don't need to give them any more than I have to get value from their product. I got what I needed, business concluded. It's not perfect, it has weird quirks around versioning, it doesn't listen to requirements, it makes shit up...
...but it still works. Just enough to get the thing, and move on with my day.
And that's pretty damn incredible.
Sharing the Work
If this integration is of genuine interest to you, I'll be dressing up the README.md and making the GitHub repo public. The license is MIT, because A) I got the thing and don't intend to support it further, and B) since AI is predicated on wide-scale theft, I believe any output from it can never be subject to copyright, trademark, or patent protections - it should be free, and shared, and I will lead by example by sharing what I've vibe-coded.
...ew. Vibe coding.
