* Main Components:
1. Project Management
Create and manage raffle projects (per user)
Example: Father's Day Raffle
Each project can have multiple raffle sets
2. Raffle System
RaffleSet: a group of raffles sold either virtually or on paper (this type is not the same as the payment method)
Raffle: Individual numbers within each set
Management of number ranges, unit prices
Statuses: available, paid, reserved
3. Buyer Management
Buyer registration with personal data
Email and phone validation
Purchase history
4. JWT Authentication System
User login/registration
JWT tokens to protect endpoints
Bearer token-based authentication
5. MySQL/MariaDB Database
Relational structure with SQLAlchemy
Tables: projects, rafflesets, raffles, buyers, users
Relationships between entities
* Typical Usage Flow
Administrator registers/logs in
Creates a project (e.g., "Scout Camp Raffle")
Within the project, they create raffle sets (e.g., "First Prize" and "Second Prize", though any name works; prizes are not necessarily separated by raffle set)
Numbers are automatically generated based on the requested quantity
Buyers register and purchase numbers
Numbers can be marked as paid
System ready for raffles (returns sold numbers for raffle draws easily!)
* Technologies
FastAPI (REST API)
SQLAlchemy (ORM)
MySQL/MariaDB (Database)
JWT (Authentication)
Pydantic (Data Validation)
bcrypt (Password Hashing)
Devlog 4: Big System Review and Architecture Improvements
I focused on user isolation, database reliability, and cleaner, more efficient code. Changes introduced: composite primary keys, universal helper functions, robust database initialization, and a timezone-aware authentication system.
🏗️ Architectural Changes
1. Database Schema Revolution: Composite Primary Keys
Problem Solved: The original system used global auto-increment IDs, which caused security issues and a lack of user isolation.
Solution Implemented: Composite primary key architecture
Benefits Achieved:
- Good User Isolation: Each user sees their entities numbered from 1
- Security Enhancement: Prevents user enumeration and information leakage
- Predictable URLs: /buyer/1 always refers to the user's first buyer
- Scalable Architecture: Each user operates in their own numbering space
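As an illustration, here is a minimal SQLAlchemy sketch of the composite-key idea; the model and column names (User, Buyer, user_id, id) are my assumptions, not the project's actual code:

```python
# Hypothetical sketch of a per-user composite primary key, using SQLAlchemy's
# declarative style; the names (User, Buyer, user_id, id) are illustrative only.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)

class Buyer(Base):
    __tablename__ = "buyers"

    # The pair (user_id, id) forms the primary key, so each user's buyers are
    # numbered 1, 2, 3, ... independently of every other user's buyers.
    user_id = Column(Integer, ForeignKey("users.id"), primary_key=True)
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable=False)
    phone = Column(String(20))
```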
2. Universal Helper Functions System
Problem Solved: Repetitive database operations across different models with inconsistent patterns and error handling.
Solution Implemented: Universal functions in routes/__init__.py
Benefits:
- Code Reusability: Single functions handle all models consistently
- Error Handling: Centralized exception management and HTTP responses
- Maintainability: Changes to database logic only need to be made in one place
- Type Safety: Proper parameter validation and return types
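A rough sketch of what such generic helpers can look like; the signatures and error handling here are assumptions, and the real functions in routes/__init__.py may differ:

```python
# Hypothetical generic CRUD helpers, assuming SQLAlchemy sessions and FastAPI's
# HTTPException; the real routes/__init__.py may be structured differently.
from fastapi import HTTPException
from sqlalchemy.orm import Session

def get_record(db: Session, model, **filters):
    """Return the single record matching the filters, or raise a 404."""
    record = db.query(model).filter_by(**filters).first()
    if record is None:
        raise HTTPException(status_code=404, detail=f"{model.__name__} not found")
    return record

def create_record(db: Session, model, data: dict):
    """Insert a new record and return it, rolling back and raising 400 on failure."""
    try:
        record = model(**data)
        db.add(record)
        db.commit()
        db.refresh(record)
        return record
    except Exception:
        db.rollback()
        raise HTTPException(status_code=400, detail="Could not create record")
```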
3. Advanced Raffle Filtering
Problem Solved: Limited ability to filter and search raffles for drawings and sales management.
Solution Implemented: Enhanced raffle filtering system
Benefits Achieved:
- Sales Management: Quick identification of available/sold raffles
- Performance: Efficient querying with proper indexing
- Flexibility: Multiple filter combinations for different use cases
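For example, a filtered raffle listing endpoint could look roughly like this; the query parameter names and import paths are assumptions based on the structure described elsewhere in this devlog:

```python
# Hypothetical filtered listing of raffles; the query parameters (status, buyer_id)
# and the import paths are assumptions, not the project's actual code.
from typing import Optional

from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from database.connection import get_db   # assumed location of get_db()
from models.raffle import Raffle          # assumed model module

router = APIRouter()

@router.get("/raffles")
def list_raffles(
    status: Optional[str] = None,      # e.g. "available", "paid", "reserved"
    buyer_id: Optional[int] = None,
    db: Session = Depends(get_db),
):
    query = db.query(Raffle)
    if status is not None:
        query = query.filter(Raffle.status == status)
    if buyer_id is not None:
        query = query.filter(Raffle.buyer_id == buyer_id)
    return query.all()
```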
4. Robust Database Initialization System
Problem Solved: Application failures when database doesn't exist, inconsistent table creation across environments.
Solution Implemented: Intelligent database initialization
Features Implemented:
- Environment Detection: Different behavior for local vs. production
- Graceful Failure Handling: Continues operation even with initialization warnings
- Multiple Creation Strategies: SQLAlchemy metadata + SQL fallback
- Verification System: Always confirms all required tables exist
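A simplified sketch of that kind of initialization flow, assuming SQLAlchemy metadata plus a verification step (the real code also has a raw-SQL fallback and environment detection):

```python
# Hypothetical initialization sketch: create missing tables from SQLAlchemy
# metadata, then verify that every expected table exists. The table names come
# from the project description; the real code adds a raw-SQL fallback.
from sqlalchemy import inspect

REQUIRED_TABLES = {"projects", "rafflesets", "raffles", "buyers", "users"}

def init_database(engine, base):
    try:
        base.metadata.create_all(bind=engine)   # create any missing tables
    except Exception as exc:
        # Graceful handling: warn and let the verification below decide.
        print(f"Warning during table creation: {exc}")

    existing = set(inspect(engine).get_table_names())
    missing = REQUIRED_TABLES - existing
    if missing:
        raise RuntimeError(f"Missing tables after initialization: {missing}")
```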
5. JWT Authentication Timezone Fix
Problem Solved: JWT tokens were immediately expiring due to timezone mismatches between token creation (local time) and validation (UTC time).
Solution Implemented: Timezone-aware token creation
Benefits Achieved:
- Reliable Authentication: Tokens work correctly
- User Experience: No unexpected authentication failures
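The core of the fix is creating the exp claim with an aware UTC datetime. A minimal sketch, assuming the python-jose library and a secret that would come from .env:

```python
# Hypothetical timezone-aware token creation; the python-jose library, secret,
# and algorithm are assumptions. The key point is datetime.now(timezone.utc).
from datetime import datetime, timedelta, timezone
from jose import jwt

SECRET_KEY = "change-me"   # in the real project this would come from .env
ALGORITHM = "HS256"

def create_access_token(data: dict, expires_minutes: int = 30) -> str:
    to_encode = data.copy()
    # An aware UTC datetime avoids the "token already expired" bug caused by
    # mixing local time at creation with UTC at validation.
    to_encode["exp"] = datetime.now(timezone.utc) + timedelta(minutes=expires_minutes)
    return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
```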
🧰 Technical Improvements
Code Quality and Maintainability
1. Consolidated Error Handling: Centralized exception management across all routes
2. Consistent API Patterns: All endpoints follow the same structure and response format
Performance Optimizations
1. Efficient Querying: Optimized database queries with proper filtering and indexing
2. Batch Operations: Bulk raffle creation for raffle sets
3. Connection Pooling: Proper database connection management
Development Experience
1. Zero Configuration Setup: Database and tables created automatically
2. Clear Error Messages: Detailed feedback for troubleshooting
3. Environment Flexibility: Works seamlessly in local and production environments
4. Comprehensive Testing: All major functions include error handling and validation
Mainly, this was a massive round of bug fixes. I'm constantly learning, trying to build something I've never done before, and it really takes me a long time to understand what I'm doing; I ask ChatGPT and GitHub Copilot a lot of questions along the way.
I optimized the route code because it was very repetitive. Now, with get_record(), get_records(), create_record(), update_record(), and delete_record(), code redundancy is avoided.
I modified many things. I added a core that centralizes the configuration (with .env files for added security, which is good practice). In that same core, MySQL is set up: its availability is verified, and the user from the .env file is looked up on the server defined in the same .env; if it doesn't exist, it is created. The package's init file then creates the initial engine, an instance that connects to the server without selecting a database yet.
Next, in the database folder, a function in create.py verifies that the database exists and, if it doesn't, creates it; connection.py then creates an engine instance pointing to that database. get_db is accessed through the routes.
The models folder contains the structure of each table in the database.
The routes folder contains all the functions called when accessing each route. Typically, each function takes a schema as a parameter and has a dependency on get_db (the session generator). To make requests to the database, the table's model is needed.
I also implemented JWT authentication (I had to upgrade to a different library version because the one I started with was outdated). At first, I opted to watch a tutorial, but GitHub Copilot helped me finish the implementation because I was having trouble understanding it.
Auth/utils.py contains the utilities for creating tokens and hashing passwords. Another folder holds the Pydantic model for the tokens, and another contains the part that communicates with the database for the users of the API.
The auth routes folder calls the service and utilities in the auth folder.
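For reference, the password-hashing side of such utilities usually looks something like this with bcrypt; this is a generic sketch, not necessarily what Auth/utils.py actually does:

```python
# Generic bcrypt hashing sketch; the actual Auth/utils.py may use passlib or a
# different wrapper, so treat this as illustrative only.
import bcrypt

def hash_password(plain: str) -> str:
    return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")

def verify_password(plain: str, hashed: str) -> bool:
    return bcrypt.checkpw(plain.encode("utf-8"), hashed.encode("utf-8"))
```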
What took me the longest was fixing bugs, adding exception handling, and data validation.
This entire project allowed me to learn things that I'm sure could help me in the future at work and in other projects, such as a sales manager for a shoe store, for example.
In the future, I'm interested in adding a frontend that generates requests using forms. I think it's better for demonstration purposes.
I want to clarify a couple of things. The database didn't take me 11 hours; I simply reached the point where I had to post a devlog. I would have liked to split the progress into separate devlogs, but there was a limit. I'm new to devlogging and to the technologies I'm implementing, which is why my devlogs are so late and strange.
Creating the REST API in Python
The technologies used are: FastAPI, Pydantic for the FastAPI schemas, and SQLAlchemy for the database models (I learned that it's like translating the database into Python objects).
Database Configuration in the Project
In config/database.py, I create setup_mysql():
I verify that the MySQL/MariaDB server is working correctly.
I check if the specific raffles-manager user exists.
If it doesn't, I create it.
In config/__init__.py:
I import "createengine" and the setup_mysql()
function.
I check if the user was created with the function I imported.
If it was created, I then create the engine that allows me to connect to MySQL with my user (still without a database).
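Roughly, I understand this step as something like the following sketch; the environment variable names and the commented-out import are assumptions, and the real config module may differ:

```python
# Hypothetical sketch of config/__init__.py: run setup_mysql() first, then build
# an engine that connects to the server without selecting a database yet.
# The environment variable names are assumptions.
import os
from sqlalchemy import create_engine
# from config.database import setup_mysql   # assumed import of the setup step

DB_USER = os.getenv("DB_USER", "raffles-manager")
DB_PASSWORD = os.getenv("DB_PASSWORD", "")
DB_HOST = os.getenv("DB_HOST", "localhost")

def build_server_engine():
    # Only continue if the MySQL user could be verified or created:
    # if not setup_mysql():
    #     raise RuntimeError("MySQL setup failed")
    url = f"mysql+pymysql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}/"
    return create_engine(url)   # no database name: a server-level connection
```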
In database/create.py:
I created a function that checks if the database exists (for greater scalability).
I created a function that creates the database (from a structure.sql in the same directory) and returns True when created successfully. It includes exception handling.
In database/connection.py:
Create an engine that now points to the database.
Implement a get_db() function for the endpoints.
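This get_db() pattern is the standard FastAPI + SQLAlchemy session-per-request dependency, roughly:

```python
# Standard session-per-request dependency; the connection URL is a placeholder.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql+pymysql://user:password@localhost/raffles")  # placeholder
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

def get_db():
    db = SessionLocal()
    try:
        yield db       # FastAPI injects this session into the endpoint
    finally:
        db.close()     # always close the session when the request is done
```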
In database/__init__.py:
Import "createdatabase()"
Check, using that imported function, that the database was created.
If it was created, import engine, SessionLocal, and Base.
In models/:
There is one .py file for each table in the database. Within each file, there is a class with the table name and its attributes.
Configuring routes in connection with the database
In schemas/:
There is a .py file for each table in the database. Within each file, there is a class for each endpoint with the name of the endpoint's functionality and the attributes it needs to take (I learned that these don't necessarily have to be stored in the database).
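As an illustration, the schemas for a buyers table might be split per operation like this; the class and field names are my assumptions, guided by the buyer data mentioned earlier:

```python
# Hypothetical schemas for a "buyers" table: one class per endpoint need.
# Field names are assumptions based on the buyer data described earlier.
from pydantic import BaseModel, EmailStr

class BuyerCreate(BaseModel):
    name: str
    email: EmailStr
    phone: str

class BuyerOut(BuyerCreate):
    id: int

    class Config:
        orm_mode = True   # Pydantic v1 style; lets FastAPI read SQLAlchemy objects
```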
In routes/:
There is a .py file for each table in the database. Within each file, functions are defined, each with one or two decorators above it that specify the HTTP method and the route path that triggers it. Each file handles the display of the stored data (in bulk, or per record using the Primary Key and, in some cases, a pair of unique values).
Many functions are repeated; I am in the process of optimizing the code to make it more efficient.
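For orientation, a typical route file along the lines described above might look roughly like this; the import paths, model, and schema names are assumptions based on the structure described, not the actual code:

```python
# Hypothetical routes/buyer.py: each function has a decorator with the HTTP
# method and path, takes a schema as the request body, and depends on get_db.
# The import paths, model, and schema names are assumptions.
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

from database.connection import get_db            # assumed location of get_db()
from models.buyer import Buyer                     # assumed model module
from schemas.buyer import BuyerCreate, BuyerOut    # assumed schema module

router = APIRouter(prefix="/buyer", tags=["buyers"])

@router.get("/{buyer_id}", response_model=BuyerOut)
def read_buyer(buyer_id: int, db: Session = Depends(get_db)):
    buyer = db.query(Buyer).filter(Buyer.id == buyer_id).first()
    if buyer is None:
        raise HTTPException(status_code=404, detail="Buyer not found")
    return buyer

@router.post("/", response_model=BuyerOut)
def create_buyer(payload: BuyerCreate, db: Session = Depends(get_db)):
    buyer = Buyer(**payload.dict())
    db.add(buyer)
    db.commit()
    db.refresh(buyer)
    return buyer
```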
In routes/__init__.py:
I created the `getrecord()` function to optimize the code. It includes exception handling and retrieves a record using the Primary Key.
In main.py:
I enable CORS (since without it, the endpoint tests wouldn't work).
I include the endpoint paths with a tag used for the automatic documentation generated by FastAPI in /docs.
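Putting it together, a main.py along these lines might look like this; the router module names and the permissive CORS origins are assumptions:

```python
# Hypothetical main.py: enable CORS and include the routers with tags so the
# automatic documentation at /docs groups them. Module names are assumptions.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

# from routes import buyer, project, raffle   # assumed router modules

app = FastAPI(title="Raffles Manager")

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],     # permissive so the endpoint tests work; tighten later
    allow_methods=["*"],
    allow_headers=["*"],
)

# app.include_router(buyer.router, tags=["buyers"])
# app.include_router(project.router, tags=["projects"])
```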
Use of Artificial Intelligence
As I mentioned before, I didn't know how to create an API. I've been using Python for four years, but I'd never worked with FastAPI, SQLAlchemy, or Pydantic. I used ChatGPT in the new Study and Learn mode, specifically asked it to meet the endpoint requirements (including JWT authentication, which I still need to do), and asked all my questions. I also used GitHub Copilot to resolve some bugs I didn't understand and to optimize the code a bit and make it cleaner, as I was using different practices in different files (different indentation styles and things like mixing camelCase and snake_case in different places).
ENDPOINT TESTS
I tested many requests on different endpoints using different methods. I'm afraid I kept modifying the code, so I can't guarantee they'll still work the same. I'm still learning how to improve version management, devlogs, best practices, etc.
Each user can have multiple projects (for example, a Father's Day raffle and a different Mother's Day raffle).
In each project, you can have more than one set of raffles.
Each set stores the unit price of its raffles and their type: virtual or physical (paper).
Each raffle references its set, its buyer, and of course, its number, which is unique and non-nullable (thus used as the Primary Key).
It also contains the payment method and the status, which can be changed to sold once payment is made, or to reserved if the payment is incomplete.
The payment date is also stored.
This allows buyers to share a name, but not a phone number. In turn, if, for example, a son wants to use his mother's phone number for a purchase, he can, but under his own name: the same phone number gets called, yet the winner is the son.
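To tie this together, the raffle description above (number as Primary Key, references to its set and its buyer, payment method, status, and payment date) maps roughly to a model like this sketch; the column names and types are my assumptions:

```python
# Hypothetical Raffle model matching the description above: the number is the
# primary key and the raffle references its set and its buyer. Column names and
# types are assumptions; the referenced tables live in other model files.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Raffle(Base):
    __tablename__ = "raffles"

    number = Column(Integer, primary_key=True)            # unique, non-nullable
    raffleset_id = Column(Integer, ForeignKey("rafflesets.id"), nullable=False)
    buyer_id = Column(Integer, ForeignKey("buyers.id"), nullable=True)
    payment_method = Column(String(30))
    status = Column(String(20), default="available")      # available / sold / reserved
    payment_date = Column(DateTime, nullable=True)
```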