SQLer Field Guide ★
SQLite-backed Pydantic models that store JSON, resolve relationships, and ship usable data stores in minutes.
JSON Models That Actually Work
Another kaizen. Twenty minutes to show something real. ORMs and migrations would outlast the meeting. SQLer was the notebook that shipped: Pydantic models backed by JSON on SQLite, with queries that read like Python and resolve references without ceremony.
TL;DR: SQLer gives you Pydantic models with JSON storage, fluent filters over nested data, relationship resolution, optimistic locking, bulk upserts, and JSON index helpers. Built on SQLite. Great when you need working storage in minutes, not hours.
The First Save: What SQLer Looks Like
Picture an empty SQLite database. Teach it about Japanese prefectures, then query them like Python objects.
from sqler import SQLerDB, SQLerModel
from sqler.query import SQLerField as F
class Prefecture(SQLerModel):
    name: str
    region: str
    population: int
    foods: list[str] | None = None

class City(SQLerModel):
    name: str
    population: int
    prefecture: Prefecture | None = None
# In-memory database for quick experiments
db = SQLerDB.in_memory()
Prefecture.set_db(db)
City.set_db(db)
# Create some data
kyoto = Prefecture(name="Kyoto", region="Kansai", population=2_585_000, foods=["matcha", "yudofu"]).save()
osaka = Prefecture(name="Osaka", region="Kansai", population=8_839_000, foods=["takoyaki"]).save()
# Query like you think
big_prefectures = (
Prefecture.query()
.filter(F("population") > 1_000_000)
.order_by("population", desc=True)
.all()
)
assert [p.name for p in big_prefectures][:2] == ["Osaka", "Kyoto"]
Three things to notice:
- Models are Pydantic: type hints, validation, and defaults work as expected.
- Persistence is explicit: .save() returns self with _id attached; you choose when to commit.
- Queries feel like Python: F("population") > 1_000_000 compiles to JSON-friendly SQL. (Both points are sketched right after this list.)
Tests: These examples are covered by the contract suite (see tests/test_readme.py and tests/sync/**).
Sync or Async? Both.
You don’t need to choose upfront. SQLer ships parallel APIs.
import asyncio
from sqler import AsyncSQLerDB, AsyncSQLerModel
from sqler.query import SQLerField as F
class AUser(AsyncSQLerModel):
    name: str
    age: int

async def main():
    db = AsyncSQLerDB.in_memory()
    await db.connect()
    AUser.set_db(db)
    await AUser(name="Ada", age=36).save()
    adults = await AUser.query().filter(F("age") >= 18).order_by("age").all()
    assert any(u.name == "Ada" for u in adults)
    await db.close()
asyncio.run(main())
The async stack mirrors sync semantics. Prototype in synchronous code; migrate to FastAPI or other asyncio stacks without rewrites.
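As one illustration of that migration path, here is a sketch of wiring the async models into a FastAPI app. The Product model, route shape, and lifespan wiring are assumptions made for this example; only the AsyncSQLerDB/AsyncSQLerModel calls come from the async example above.

from contextlib import asynccontextmanager

from fastapi import FastAPI
from sqler import AsyncSQLerDB, AsyncSQLerModel
from sqler.query import SQLerField as F

class Product(AsyncSQLerModel):
    name: str
    price: int

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Connect once at startup, close at shutdown (in-memory here for the sketch).
    db = AsyncSQLerDB.in_memory()
    await db.connect()
    Product.set_db(db)
    yield
    await db.close()

app = FastAPI(lifespan=lifespan)

@app.get("/products")
async def list_products(min_price: int = 0):
    # Same query surface as the sync example; FastAPI serializes the Pydantic models.
    return await Product.query().filter(F("price") >= min_price).order_by("price").all()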
Array Queries That Don’t Make You Cry
Filtering arrays of objects shouldn’t require ritual SQL. Use .any().where(...) to scope conditions inside array elements.
from sqler import SQLerDB, SQLerModel
from sqler.query import SQLerField as F
class Order(SQLerModel):
    customer: str
    items: list[dict] | None = None
db = SQLerDB.in_memory()
Order.set_db(db)
# Seed
Order(customer="Customer1", items=[{"sku": "RamenSet", "qty": 3}, {"sku": "Gyoza", "qty": 1}]).save()
Order(customer="Customer2", items=[{"sku": "RamenSet", "qty": 1}]).save()
# Find orders with ramen qty >= 2
ramen_lovers = (
    Order.query()
    .filter(F(["items"]).any().where((F("sku") == "RamenSet") & (F("qty") >= 2)))
    .all()
)
assert [o.customer for o in ramen_lovers] == ["Customer1"]
Containment, membership, exclusions, and introspection are first-class.
from sqler import SQLerDB, SQLerModel
from sqler.query import SQLerField as F
class QueryUser(SQLerModel):
    name: str
    age: int
    tags: list[str] | None = None
    tier: int | None = None

class QueryOrder(SQLerModel):
    customer: str
    items: list[dict] | None = None
db = SQLerDB.in_memory()
QueryUser.set_db(db)
QueryOrder.set_db(db)
QueryUser(name="Ada", age=36, tags=["pro", "python"], tier=1).save()
QueryUser(name="Bob", age=20, tags=["hobby"], tier=3).save()
QueryOrder(customer="Ada", items=[{"sku": "ABC", "qty": 3}]).save()
QueryOrder(customer="Bob", items=[{"sku": "XYZ", "qty": 1}]).save()
pros = QueryUser.query().filter(F("tags").contains("pro")).all()
assert [u.name for u in pros] == ["Ada"]
premium = QueryUser.query().filter(F("tier").isin([1, 2])).all()
assert [u.name for u in premium] == ["Ada"]
real_users = QueryUser.query().exclude(F("name").like("test%")).all()
assert {u.name for u in real_users} == {"Ada", "Bob"}
expr = F(["items"]).any().where((F("sku") == "ABC") & (F("qty") >= 2))
assert [o.customer for o in QueryOrder.query().filter(expr).all()] == ["Ada"]
sql, params = QueryUser.query().filter(F("age") >= 18).debug()
assert isinstance(sql, str) and params == [18]
Notes:
- Model.query() exposes .sql()/.params() methods and .debug(). (A short sketch follows this list.)
- The lower-level SQLerQuery exposes .sql/.params properties.
- explain_query_plan(adapter) is available when you need to peek at SQLite’s plan.
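A quick sketch of those introspection hooks, reusing the QueryUser model from above; the SQL text is adapter-specific, so only the call shapes are shown.

q = QueryUser.query().filter(F("age") >= 18)
print(q.sql())     # generated SELECT text
print(q.params())  # bound parameters, e.g. [18]
sql, params = q.debug()  # both pieces in one call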
References Without the Ceremony
References are stored in JSON like { "_table": "addresses", "_id": 1 } and hydrate to model instances on load.
from sqler import SQLerDB, SQLerModel
class Address(SQLerModel):
    city: str
    country: str

class User(SQLerModel):
    name: str
    address: Address | None = None
db = SQLerDB.in_memory()
Address.set_db(db)
User.set_db(db)
home = Address(city="Kyoto", country="JP").save()
user = User(name="Alice", address=home).save()
loaded = User.from_id(user._id)
assert loaded.address.city == "Kyoto"
kyoto_users = User.query().filter(User.ref("address").field("city") == "Kyoto").all()
assert [u.name for u in kyoto_users] == ["Alice"]
Optimistic Locking for the Paranoid
When multiple processes might modify the same record, detect conflicts with SQLerSafeModel and _version.
from sqler import SQLerDB, SQLerSafeModel, StaleVersionError
class Account(SQLerSafeModel):
    owner: str
    balance: int
db = SQLerDB.in_memory()
Account.set_db(db)
account = Account(owner="Ada", balance=100).save()
account.balance = 120
account.save() # increments version
# Simulate a concurrent version bump
db.adapter.execute(
    "UPDATE accounts SET data = json_set(data,'$._version', json_extract(data,'$._version') + 1) WHERE _id = ?",
    [account._id],
)
db.adapter.commit()
# Now our copy is stale
account.balance = 130
try:
    account.save()
except StaleVersionError:
    account.refresh()
    account.balance = 130
    account.save()
Mapping the conflict to an HTTP 409 in FastAPI, without requiring FastAPI at install time:
try:
    from fastapi import HTTPException
except ImportError:  # docs/CI fallback
    class HTTPException(Exception):
        def __init__(self, status_code: int, detail: str):
            self.status_code = status_code
            self.detail = detail
from sqler.models import StaleVersionError
try:
    account.save()
except StaleVersionError:
    raise HTTPException(409, "Version conflict; refresh and retry")
Referential Integrity Without Database Constraints
Deletion policies keep references honest:
- restrict: block delete when referenced
- set_null: clear referencing fields before delete
- cascade: delete dependents recursively
from sqler import SQLerDB, SQLerModel
from sqler.models import ReferentialIntegrityError
class DIUser(SQLerModel):
    name: str

class Post(SQLerModel):
    title: str
    author: dict | None = None
# restrict — block when something still points at the row
restrict_db = SQLerDB.in_memory()
DIUser.set_db(restrict_db)
Post.set_db(restrict_db)
writer = DIUser(name="Writer").save()
Post(title="Post A", author={"_table": "diusers", "_id": writer._id}).save()
try:
    writer.delete_with_policy(on_delete="restrict")
    assert False, "expected ReferentialIntegrityError"
except ReferentialIntegrityError:
    pass
# set_null — null the JSON before deleting
set_null_db = SQLerDB.in_memory()
DIUser.set_db(set_null_db)
Post.set_db(set_null_db)
nullable = DIUser(name="Nullable").save()
post = Post(title="Post B", author={"_table": "diusers", "_id": nullable._id}).save()
nullable.delete_with_policy(on_delete="set_null")
assert Post.from_id(post._id).author is None
# cascade — delete dependents
cascade_db = SQLerDB.in_memory()
DIUser.set_db(cascade_db)
Post.set_db(cascade_db)
cascade = DIUser(name="Cascade").save()
Post(title="Post C", author={"_table": "diusers", "_id": cascade._id}).save()
cascade.delete_with_policy(on_delete="cascade")
assert Post.query().count() == 0
Find any orphans that slipped through:
class RefUser(SQLerModel):
    name: str

class RefPost(SQLerModel):
    title: str
    author: dict | None = None
vdb = SQLerDB.in_memory()
RefUser.set_db(vdb)
RefPost.set_db(vdb)
u = RefUser(name="Ada").save()
dangling = RefPost(title="Lost", author={"_table": RefUser.__tablename__, "_id": u._id}).save()
vdb.delete_document(RefUser.__tablename__, u._id)
broken = RefPost.validate_references()
# sqler.models.BrokenRef objects
assert broken and broken[0].row_id == dangling._id
Bulk Operations and SQL Escape Hatches
Sometimes you need to insert a lot of rows; sometimes you need raw SQL.
Bulk upserts preserve provided IDs and return all resulting IDs in order.
from sqler import SQLerDB, SQLerModel
class BulkUser(SQLerModel):
    name: str
    age: int | None = None
db = SQLerDB.in_memory()
BulkUser.set_db(db)
rows = [{"name": "Alice"}, {"name": "Bob"}, {"_id": 999, "name": "Carol"}]
ids = db.bulk_upsert(BulkUser.__tablename__, rows)
assert len(ids) == 3 and 999 in ids
Raw SQL with decoded rows:
class ReportUser(SQLerModel):
    name: str
    email: str | None = None
rdb = SQLerDB.in_memory()
ReportUser.set_db(rdb)
ReportUser(name="Ada", email="ada@example.com").save()
ReportUser(name="Bob", email="bob@example.com").save()
rows = rdb.execute_sql(
    """
    SELECT u._id, u.data
    FROM reportusers u
    WHERE json_extract(u.data,'$.name') LIKE ?
    """,
    ["A%"],
)
# execute_sql returns a list of dicts with JSON merged:
# e.g. {"_id": 1, "name": "Ada", "email": "ada@example.com"}
assert rows and rows[0]["_id"] == 1 and "data" not in rows[0]
Indexes on JSON fields for performance:
# Simple field index
db.create_index("users", "age")
# Unique constraint
db.create_index("users", "email", unique=True)
# Partial index with WHERE clause
db.create_index("users", "age", where="json_extract(data,'$.age') IS NOT NULL")
# Nested field index
db.create_index("users", "address.city")
These compile to SQLite’s JSON1 expressions (json_extract(...)), so they can actually help the query planner.
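For orientation, here is a rough sketch of what that reduces to on the SQLite side. The statement text is a presumed shape, not verified output; to confirm the planner actually uses an index, the explain_query_plan(adapter) hook mentioned earlier is the place to look.

# Presumed shape of the generated DDL (index names illustrative, exact text may differ):
#   CREATE INDEX ... ON users (json_extract(data, '$.age'));
#   CREATE UNIQUE INDEX ... ON users (json_extract(data, '$.email'));
#   CREATE INDEX ... ON users (json_extract(data, '$.address.city'));
# A filter such as F("age") >= 18 compiles to the same json_extract expression,
# which is what lets SQLite match the index to the query.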
Deployment Notes
- Connections: manage them at the application layer; enable WAL for better concurrency on SQLite (a sketch follows this list).
- Performance: JSON queries can be slower than normalized SQL. Use indexes on hot paths. Optimize after you’ve proven the shape of the data.
- Schema evolution: adding fields is easy; removing or changing types may require clean-up scripts.
- Testing: the contract suite (tests/test_readme.py and friends) keeps examples and behavior in lockstep.
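A minimal sketch for the WAL note, assuming a file-backed database. This guide only shows the in-memory constructor, so the on-disk call below is a placeholder name to adapt to your setup; the adapter.execute/commit pair mirrors the optimistic-locking example.

from sqler import SQLerDB

db = SQLerDB.on_disk("app.db")  # placeholder: substitute your file-backed constructor

# One-time switch per database file.
db.adapter.execute("PRAGMA journal_mode=WAL", [])
db.adapter.commit()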
What’s Next?
Install with uv add sqler, create a model, and save a row. Run the tests with uv run pytest -q to see the examples exercised. When a query surprises you, .debug() shows the generated SQL and parameters.
SQLer is for the moments where structured data needs to exist now, and your model is still moving. It favors clear conversations with the database over cleverness, and it tries to make the common path feel obvious. Build something small and see where it bends.
Japanese version: read this project in Japanese.