I used an event listener to prepopulate the database with data upon creation of a table. Let's say you have a ProductType model in your code:
from sqlalchemy import event, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class ProductType(Base):
    __tablename__ = 'product_type'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
First, you need to define a callback function, which will be executed when the table is created:
def insert_data(target, connection, **kw):
    # pass the rows as a list so the statement runs as an executemany
    connection.execute(target.insert(), [{'id': 1, 'name': 'spam'}, {'id': 2, 'name': 'eggs'}])
Then you just add the event listener:
event.listen(ProductType.__table__, 'after_create', insert_data)
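As a quick sanity check, here is a minimal sketch (the in-memory SQLite engine is just an assumption for demonstration) showing that creating the table fires the listener and seeds the rows:
from sqlalchemy import create_engine, select

engine = create_engine('sqlite:///:memory:')

# Emits CREATE TABLE for product_type, which triggers insert_data.
Base.metadata.create_all(engine)

with engine.connect() as conn:
    print(conn.execute(select(ProductType.__table__)).fetchall())
    # [(1, 'spam'), (2, 'eggs')]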
The short answer is no: SQLAlchemy doesn't provide a built-in equivalent of Django's dumpdata and loaddata.
There is https://github.com/kvesteri/sqlalchemy-fixtures, which might be useful for you, but the workflow is different.
Background
I am migrating one of my smaller projects from Django to FastAPI. I have static data loaded into MySQL using Django fixtures, and I wanted to maintain that functionality with FastAPI as well.
This basic Python script works for me (at least for now), but you may modify it according to your needs.
Folder Structure
- fixtures
  - data
    - table_name.json
    - table_name2.json
  - main.py
You can have as many .json files here as you like. The script reads all the files under this directory and performs the equivalent of Django's loaddata.
Fixtures File
[
  {
    "id": int,
    "name": str
  }
]
Every JSON object here should translate to an insert query in MySQL. For example, if name is a required field, you must have it here.
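A hypothetical data/product_type.json for a product_type table might look like this:
[
  {
    "id": 1,
    "name": "spam"
  },
  {
    "id": 2,
    "name": "eggs"
  }
]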
Complete Example
from json import loads
from os import listdir
from os.path import dirname

from sqlalchemy import create_engine, delete, insert
from sqlalchemy.schema import MetaData

fixtures_directory: str = dirname(__file__) + "/data/"
engine = create_engine("sqlite:///your_database_path", connect_args={"check_same_thread": False})
conn = engine.connect()  # the transaction is committed explicitly in close_connection()
meta = MetaData()
meta.reflect(bind=engine)  # load every table definition from the database

def close_connection():
    conn.commit()
    conn.close()

def read_file(file_path: str):
    with open(file_path, "r") as f:
        return loads(f.read())

def insert_data(fixtures: list, table_name: str):
    table = meta.tables[table_name]  # the reflected Table object
    conn.execute(delete(table))  # clear out stale rows before re-inserting
    for item in fixtures:
        conn.execute(insert(table).values(item))

def get_file_paths():
    return listdir(fixtures_directory)

def load_fixtures():
    file_paths: list = get_file_paths()
    for file_path in file_paths:
        fixtures: list = read_file(file_path=fixtures_directory + file_path)
        # str.removesuffix requires Python 3.9+
        insert_data(fixtures=fixtures, table_name=file_path.removesuffix(".json"))
    close_connection()

load_fixtures()
Since this is a standalone script, you need to ensure there are no relative imports here.
Why am I truncating the table?
conn.execute(delete(table))
All rows in the table are deleted so that the newer rows can be inserted. I only care about the static data in the table, and no update operations take place here. You need to ensure your insert command will not fail with an IntegrityError (an integrity constraint violation). If you wish to update the data instead of replacing it, you may use Session.merge() instead.
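As a rough sketch of that merge() alternative (the automap mapping and the product_type name are assumptions, not part of the script above; the table needs a primary key):
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

# Session.merge() needs mapped classes, so map the reflected tables first.
AutoBase = automap_base()
AutoBase.prepare(autoload_with=engine)  # SQLAlchemy 2.0-style reflection
ProductType = AutoBase.classes.product_type  # hypothetical table name

with Session(engine) as session:
    for item in fixtures:
        session.merge(ProductType(**item))  # inserts or updates keyed on the primary key
    session.commit()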
Do not forget!
conn.commit()
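If you would rather not manage the commit yourself, here is a sketch using engine.begin() (reusing the names from the script above), which opens a transaction and commits automatically on success:
# engine.begin() commits on success and rolls back on error.
with engine.begin() as conn:
    for file_path in get_file_paths():
        fixtures = read_file(file_path=fixtures_directory + file_path)
        table = meta.tables[file_path.removesuffix(".json")]
        conn.execute(delete(table))
        for item in fixtures:
            conn.execute(insert(table).values(item))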