Bio

github:headblockhead

Stats

2 Projects
36 Devlogs
0 Votes
1 Ship

Coding Time

All Time: 202h 36m
Today: 0h 0m

Member Since

June 16, 2025

Badges

1
🚢
Maiden Voyage
you shipped your first project! the journey begins...

Projects

2
edwardh.dev
4 devlogs • about 2 months ago
Rail Regard
32 devlogs • 3 months ago

Activity

Earned sticker

My first devlog didn't have enough time behind it for the cool sticker, so I decided to put in a few extra minutes to earn it. I ended up getting sidetracked figuring out how to get the sequence number of a schedule location based on the times that the train visits the stop (still haven't verified if it works yet, though - all of this is very, very hacky and experimental, and not very optimised. I'm just trying to throw stuff together in time for the SoM project deadline in 3-4 days!)
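
Roughly, the idea is to order the locations by the time the train is there and take each one's position in that order - something like this sketch (the types and field names are made up, not the real railreader ones):

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// location is an illustrative stand-in for a schedule location.
type location struct {
	TIPLOC string
	Visit  time.Time // when the train is at this stop (arrival, or departure for origins)
}

// sequenceNumbers sorts locations by visit time and maps each location code
// to its 1-based position in the schedule.
func sequenceNumbers(locs []location) map[string]int {
	sorted := append([]location(nil), locs...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return sorted[i].Visit.Before(sorted[j].Visit)
	})
	seq := make(map[string]int, len(sorted))
	for i, l := range sorted {
		seq[l.TIPLOC] = i + 1
	}
	return seq
}

func main() {
	day := time.Date(2025, 9, 1, 0, 0, 0, 0, time.UTC)
	locs := []location{
		{TIPLOC: "LEEDS", Visit: day.Add(9*time.Hour + 30*time.Minute)},
		{TIPLOC: "YORK", Visit: day.Add(8*time.Hour + 50*time.Minute)},
	}
	fmt.Println(sequenceNumbers(locs)) // map[LEEDS:2 YORK:1]
}
```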

Update attachment

Didn't do much at all today, was feeling a little ill.
I just added a structure to store associations between trains in the database.

Update attachment
Earned sticker

Woo!
Today I did a lot more work than normal, and now I can parse and insert a timetable into my database!!
Unfortunately, this means inserting OVER 1 MILLION ROWS per timetable, which takes about 2 minutes using individual insert statements :(
PostgreSQL has a COPY command for bulk-loading data, which the library I'm using supports, so I now need to figure out how to use it to load rows quickly without changing the structure of my project too much. I'm running out of time!
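
Roughly what that looks like with pgx (assuming that's the Postgres library in use; the table and column names here are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://railreader@localhost/railreader")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// rows would really come from the parsed timetable; these are placeholders.
	rows := [][]any{
		{"202509018765432", "YORK", "0850"},
		{"202509018765432", "LEEDS", "0930"},
	}

	// CopyFrom streams every row in a single COPY operation instead of
	// issuing one INSERT per row.
	copied, err := conn.CopyFrom(ctx,
		pgx.Identifier{"schedule_locations"},
		[]string{"rid", "tiploc", "departure_time"},
		pgx.CopyFromRows(rows),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d rows", copied)
}
```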

Update attachment
Earned sticker

Today I did a lot more work on generating SQL statements: I managed to set up functions to write select, insert, update, and delete statements based on structures, and also to use PostgreSQL's COPY functionality to turn inserting thousands of rows at once into a single operation.

It's a bit of a mess, with lots of use of the 'reflect' standard library package, which is a useful low-level tool for working with the types of variables, but risks panics at runtime if it's not used correctly. Reflection is also pretty slow from what I've heard, but I'd rather make the database programming significantly quicker and easier than save a few milliseconds per message.

the commit
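
A very simplified sketch of the approach (not the actual railreader code): reflect over a struct's `db` tags to build an INSERT and its arguments.

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// insertSQL builds an INSERT statement and its arguments from any struct whose
// exported fields carry a `db:"column"` tag. Simplified: no nested structs,
// no NULL handling, and it will panic if given a non-struct.
func insertSQL(table string, v any) (string, []any) {
	t := reflect.TypeOf(v)
	val := reflect.ValueOf(v)
	var cols, placeholders []string
	var args []any
	for i := 0; i < t.NumField(); i++ {
		col, ok := t.Field(i).Tag.Lookup("db")
		if !ok {
			continue
		}
		cols = append(cols, col)
		placeholders = append(placeholders, fmt.Sprintf("$%d", len(args)+1))
		args = append(args, val.Field(i).Interface())
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s);",
		table, strings.Join(cols, ", "), strings.Join(placeholders, ", ")), args
}

// trainOperator is an invented example type, not railreader's real schema.
type trainOperator struct {
	Code string `db:"toc_code"`
	Name string `db:"name"`
}

func main() {
	sql, args := insertSQL("train_operators", trainOperator{Code: "NT", Name: "Northern"})
	fmt.Println(sql, args)
	// INSERT INTO train_operators (toc_code, name) VALUES ($1, $2); [NT Northern]
}
```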

Update attachment
Earned sticker

Just found out I can use struct tags to generate SQL based on structs, and am currently adding this to the small bit of existing database code, because it'll save me so much time writing SQL later.

Update attachment
Earned sticker

Did a little today, just finished up writing the code to insert all reference data values into the DB, and to delete all old values when a new reference is imported.
Next is to 'interpret' the unmarshalled XML into DB rows.
see the commit
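
One way to keep that import safe is to wrap the delete-then-insert in a single transaction, so a half-finished import never leaves the reference tables in a mixed state - roughly like this sketch (assuming pgx; the table, columns, and type are invented):

```go
package reference

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// referenceLocation is an invented example type, not the real railreader one.
type referenceLocation struct {
	TIPLOC string
	Name   string
}

// replaceLocations swaps the old reference rows for the new set in one
// transaction: either the whole import applies, or none of it does.
func replaceLocations(ctx context.Context, conn *pgx.Conn, locs []referenceLocation) error {
	tx, err := conn.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // harmless if Commit already succeeded

	if _, err := tx.Exec(ctx, `DELETE FROM reference_locations;`); err != nil {
		return err
	}
	for _, l := range locs {
		if _, err := tx.Exec(ctx,
			`INSERT INTO reference_locations (tiploc, name) VALUES ($1, $2);`,
			l.TIPLOC, l.Name); err != nil {
			return err
		}
	}
	return tx.Commit(ctx)
}
```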

Update attachment
Earned sticker

Did a lot more today, as it was a weekend.
I mostly worked on understanding train loading categories, and creating tables in the database for the 'x via y' text (more annoying than you'd think - the via list in the XML MUST be processed from top to bottom to find the first match, and there's nothing good to use as a primary key, so I have to generate one automatically).

It's still work-in-progress, so it currently doesn't compile, but I'm feeling good about this, it's going well.

Oh and also the database should be able to deal with timezone stuff now, 'cause I get data in BST.
At least hopefully it can - I haven't tested it yet.

see the commit on GitHub
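
For the curious, Go's time package handles the BST side if the feed's times are parsed in the Europe/London zone and converted to UTC before storage - a quick sketch of the idea, not the actual code:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Darwin times arrive as UK local time, which is BST for half the year.
	london, err := time.LoadLocation("Europe/London")
	if err != nil {
		log.Fatal(err)
	}

	// Parse a feed time in the London zone, then convert to UTC for storage.
	t, err := time.ParseInLocation("2006-01-02 15:04:05", "2025-08-01 09:30:00", london)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(t)       // 2025-08-01 09:30:00 +0100 BST
	fmt.Println(t.UTC()) // 2025-08-01 08:30:00 +0000 UTC
}
```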

Update attachment
Earned sticker

I only worked a little today, but I began work on interpreting the timetable files I can now unmarshal, which I intend to process into rows in the schedule table.

Update attachment
Earned sticker

Today, I read through the schema for timetable data, and implemented an unmarshaller for it. It went quite quickly, as timetables are very similar to schedules, so I could mostly copy schedule code and adjust it slightly to match the timetable schema.

Next TODO is to write the interpreter code for all of this.

Update attachment
Earned sticker

I didn't do much work today, but I set up a little bit of framework to pull in timetable data as well as reference data, so I can grab the next ~48hrs of timetable immediately on startup, instead of only being able to show trains that have been activated.

Update attachment
Earned sticker

Today, I set up the reference data download to run automatically when a specific message is received, and set up a download of the latest reference data to run when the program is started up, to avoid issues with foreign key constraints.
I also wrote all of the SQL to put this data into the database!
Finally, I can get back on with interpreting the rest of the data I have, and writing that query engine, before it's too late!

Update attachment
Earned sticker

Today, I added the ability to download reference data from AWS S3. Rail Data Marketplace got back to me, and they say they've referred it to an engineer, so hopefully it gets fixed. The refactor that I've just spent most of my time on should allow the SFTP stuff to be a drop-in replacement/addition, as I have abstracted some components to be interfaces.

I haven't had the chance to test this yet, but I'm about to - the next step is to create an interpreter function and a repository so that I can store the data in the database, and finally get back on with the most important part: the query engine. I want to get this thing finished for the deadline, which is approaching scarily fast considering I haven't even got a frontend yet.
I can probably slap something together with some basic HTML to do simple queries; sadly, I've spent so long implementing as much as I can from the specification and making all the code neat and tidy that I've neglected the actual end goal, which is to display the damn train times!!

This is the git diff - it's mostly changes due to renamed variables and packages :P
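
The abstraction is roughly this shape: an interface for "somewhere reference data comes from", with S3 as the first implementation and SFTP as a possible drop-in later (all names here are invented for illustration, not railreader's real identifiers):

```go
package reference

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// Source is anything that can hand over the latest reference data file,
// whether it comes from S3 or, later, SFTP.
type Source interface {
	Latest(ctx context.Context) (io.ReadCloser, error)
}

// S3Source fetches reference data from a bucket; an SFTP-backed type could
// implement the same interface as a drop-in replacement.
type S3Source struct {
	Client *s3.Client
	Bucket string
	Key    string
}

func (s *S3Source) Latest(ctx context.Context) (io.ReadCloser, error) {
	out, err := s.Client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(s.Bucket),
		Key:    aws.String(s.Key),
	})
	if err != nil {
		return nil, err
	}
	return out.Body, nil
}
```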

Update attachment
Earned sticker

Today, I decided to use the National Rail Data Portal as my source for reference data, since Rail Data Marketplace support hasn't gotten back to me yet and the SFTP transfers still don't work (to be fair, it has been a weekend, so they'll probably get back to me tomorrow). This means I will be grabbing data from Amazon's AWS S3 service, which is much easier than writing an SFTP server!
Unfortunately, the NRDP is apparently going to be shut down in early 2026, so I hope the SFTP stuff gets fixed by then.
Oh, also: I have moved from having one database per datasource to a shared database for all data, with SQL statements and functions split per datasource. This will hopefully reduce the complexity of setting up the service, but may make adding more datasources slightly more difficult later.

Update attachment

Hello again! Apparently explorpheus has stopped sending 10-hour notifications, so I forgot to devlog in time again :nooo:
Since the last log:
- I've added a NixOS service to make the program easier to self-host, by automatically configuring a user for the service and setting up PostgreSQL.
- Added support for receiving file transfers from the Rail Data Marketplace, to load in reference data over SFTP! (Unfortunately, the RDM is broken and SFTP transfers out don't actually work, so that was a waste of a lot of time. Hopefully support gets back to me, but I doubt they'll be able to fix the issue.)
See the full diff on github

Update attachment

I just had a great bit of focused coding time, and now I've got an SFTP server set up for receiving timetable and reference files from the Rail Data Marketplace (and a Nix flake to build the project easily)! Sadly, I went about 2 hours over the time I get tracked for, so I lost a little time, but I think it was worth it for the focus anyway.
Hopefully I can set up the Darwin message processor to read the files that get SFTP'd over for the next devlog; then I'll be just a little closer to being done with the backend. (There's still so much more time this will take, though. I thought the finish line was in sight, but it feels like - at least if I want to make something high quality and well written - there's still at least another 100 hours of work to do. I'll probably have to cut this project a little short and get a 'beta' version shipped out before the Summer of Making ends.)

Update attachment

Since the last devlog, I've added unit tests for the schedule interpreter, set up 2 new tables to keep track of messages and message responses, significantly improved how logs are used, fixed how XHTML station message data is decoded so it's consistent, and most importantly fixed a bug in the schedule time interpreter to allow train times to cross midnight inside a location (e.g. arrival time 23:59, departure 00:01).
See the full diff on GitHub.
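
The core of that kind of fix is just "if a time appears to go backwards, it must have rolled past midnight, so bump the date" - something like this sketch (not the real interpreter):

```go
package main

import (
	"fmt"
	"time"
)

// resolveTimes turns a list of "HH:MM" strings for one service into full
// timestamps, rolling over to the next day whenever a time goes backwards,
// e.g. arrival 23:59 followed by departure 00:01.
func resolveTimes(day time.Time, clockTimes []string) ([]time.Time, error) {
	resolved := make([]time.Time, 0, len(clockTimes))
	current := day
	var previous time.Time
	for _, ct := range clockTimes {
		t, err := time.Parse("15:04", ct)
		if err != nil {
			return nil, err
		}
		full := time.Date(current.Year(), current.Month(), current.Day(),
			t.Hour(), t.Minute(), 0, 0, current.Location())
		if !previous.IsZero() && full.Before(previous) {
			// Crossed midnight: move on to the next day.
			full = full.AddDate(0, 0, 1)
			current = current.AddDate(0, 0, 1)
		}
		resolved = append(resolved, full)
		previous = full
	}
	return resolved, nil
}

func main() {
	day := time.Date(2025, 8, 16, 0, 0, 0, 0, time.UTC)
	times, _ := resolveTimes(day, []string{"23:50", "23:59", "00:01"})
	for _, t := range times {
		fmt.Println(t)
	}
	// Prints the 16th at 23:50 and 23:59, then the 17th at 00:01.
}
```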

Update attachment

I've finally figured out a consistent style to apply across my codebase, which means I think this has been the final refactor I'll need to do! (I hope - this took ages!) Anyway, I haven't really made a single 'large' change, just hundreds of little details tidied up around the code: renaming packages, variables, and types to make their purpose clearer, removing the use of pointers where possible to reduce the chance of a nil pointer panic, and adding a lot more logs to make tracing how railreader processes a message significantly easier.

The next thing I plan to work on is to write tests for the schedule code, then plan out the next tables to add to the database for the other message types.

Update attachment

Since the last devlog, I haven't really progressed the actual functionality of the railreader backend much, but I have refactored almost all of the codebase to allow (for example) the code that interprets the content of a message to be independent of the code that writes the updates to a database, which will make it significantly easier to write tests! I learnt about the unit of work pattern (all work takes place in a transaction, which either fully fails or fully commits) and the repository pattern (database reads and writes are hidden behind an interface, independent of the database itself). Both work really well as a basis for how I can design my codebase, so I re-wrote a few areas to follow them.

If markdown works here, you can view my changes on GitHub.
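
A sketch of how those two patterns can fit together in Go (invented names, assuming pgx): the repository hides the SQL, and the unit of work wraps every repository call for one message in a single transaction.

```go
package railreader

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// ScheduleRepository hides the actual SQL behind an interface (the repository
// pattern), so interpreter code can be tested against a fake implementation.
type ScheduleRepository interface {
	InsertLocation(ctx context.Context, rid, tiploc string) error
}

// UnitOfWork runs fn inside a single transaction: either every repository
// call for one message commits together, or all of it rolls back.
func UnitOfWork(ctx context.Context, conn *pgx.Conn,
	fn func(ScheduleRepository) error) error {
	tx, err := conn.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // harmless after a successful Commit

	if err := fn(&pgxScheduleRepository{tx: tx}); err != nil {
		return err
	}
	return tx.Commit(ctx)
}

type pgxScheduleRepository struct{ tx pgx.Tx }

func (r *pgxScheduleRepository) InsertLocation(ctx context.Context, rid, tiploc string) error {
	_, err := r.tx.Exec(ctx,
		`INSERT INTO schedule_locations (rid, tiploc) VALUES ($1, $2);`, rid, tiploc)
	return err
}
```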

Update attachment

I've got all types of locations reporting all of their times correctly now! Unfortunately, times crossing midnight don't record the new date correctly, despite tests that suggest they should. At the moment I'm halfway through a refactor of the project structure to try to make it clearer how the code works, as I have a lot of functions with stupid numbers of arguments and weird types that don't really get used sensibly, and I also want the ability to create one DB transaction per message, to avoid issues with the DB only having half of a message applied.

TLDR: nothing works properly, and the code needs a good refactor, so that's what I'm working on right now.

I've attached the incorrect dates that I was talking about; the date of the schedule entries near the bottom should be the 17th, not the 16th.

Update attachment

I've been picking up speed! Most of the difficult stuff in terms of database-ing the most important bit (the train times) is out of the way. I've only got Origin locations updating their arrival/departure times correctly so far, so next I'll work on getting the other types of location to provide correct times too!

Update attachment

Since the last log, I have gotten stuff going into the database! I set up SQL INSERT statements, and I can now interpret train times into actual [time.Time]s (difficult, because times can cross midnight); I have also written test cases for this. I also started work on interpreting the reference data that provides info about train stations and train operating companies. Finally, I did tons of organization work to sort different parts of the project into neat folders, so I can scope my efforts better.

Update attachment

Still a work-in-progress, but I have broken up a lot of the big hard-to-understand functions into more manageable smaller functions, and split up functions with different purposes into different files to make future organization easier. I have now got my first INSERT statement (almost) working!!! I'm close to getting the first bits of real data into the database, and it's pretty exciting.

Update attachment

It's been a while since I wrote a devlog - I took ~2 weeks off for a break over summer, and to work on some of my other projects. Since I last wrote: I fixed HTML decoding for alerts and messages to be compatible with encoded special characters, like `&nbsp;`. I also started writing the SQL for my database, and setting up a connection to PostgreSQL!

Update attachment

I think I'm ready to ship! I added a pride-coloured background behind the article view, and also made it get brighter in a circle around your cursor, for interactivity. Sadly, the cursor effect doesn't work on Internet Explorer, but everything else does! I also fixed an issue where the article view would extend beyond 100% of the viewport on mobile.

Update attachment

Since the last devlog, I went through all the features of my website and got them fully working on IE11! I had to change a JavaScript for loop to use older syntax, and I got rid of the CSS variables. BUT it looks beautiful and behaves perfectly on every browser now!

Update attachment

Since the last devlog, I have added my face to the front page, and used a flexbox to put it either side by side with, or above/below, the introduction section.
Still need to test how this appears on IE.

Update attachment

Started logging the time that I spend on my website! So far, I have written an introductory paragraph, and summarized how I got started coding. I have also improved the site's compatibility with all browsers by removing the use of flexbox styles - it renders perfectly even on Internet Explorer 11! I also set the font to default to the OS's system-ui font, which significantly reduces load times by not requiring a font download before the page can render.

Update attachment
Edward Hesketh
Edward Hesketh created a project
50d ago

edwardh.dev

My (IE11 compatible) personal homepage written from scratch in HTML. It looks good in light or dark mode, on mobile or desktop, and even when printed. Written without the use of AI.

edwardh.dev
4 devlogs 0 followers Shipped

I've done it! All possible Darwin messages can now be fully decoded into structures, with all correct defaults set, and it's all fully tested too!

I'll probably rest for now, but the next TODO is to interpret all of the data into useful values (e.g. convert the string "12:02:30" into a time+date value - more difficult than you'd think when trains cross midnight boundaries!)

Update attachment

Since my last devlog, I have finished fully decoding all possible Darwin messages into go structs, and have added test cases for most unmarshal functions.

https://github.com/headblockhead/railreader/compare/9a6d7adc303644429add2d89d298fecbacafe399..b043404bf959fc8fcf57efbe8b5c9a030920c854
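
For flavour, the decoding all builds on encoding/xml struct tags, along these lines (element and attribute names are simplified, not the exact Darwin schema):

```go
package main

import (
	"encoding/xml"
	"fmt"
	"log"
)

// Simplified illustration only - the real Darwin schema is far bigger.
type trainStatus struct {
	XMLName   xml.Name   `xml:"TS"`
	RID       string     `xml:"rid,attr"`
	Locations []location `xml:"Location"`
}

type location struct {
	TIPLOC  string  `xml:"tpl,attr"`
	Arrival arrival `xml:"arr"`
}

type arrival struct {
	Estimated string `xml:"et,attr"`
}

func main() {
	data := []byte(`<TS rid="202508167154321">
	  <Location tpl="LEEDS"><arr et="0931"/></Location>
	</TS>`)

	var ts trainStatus
	if err := xml.Unmarshal(data, &ts); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", ts)
}
```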

Update attachment

Since the last devlog, I have added in string descriptions of almost every activity, and fully enumerated and string-ed train types too. I also wrote out all of the structures for train formation data, train association data, and train forecast data.

Phew!

I think I'm about halfway to getting every possible message unmarshalled into usable structs. Once I'm done with this bit, I can start interpreting the data somewhat, just to get everything into a database.
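
The enum-plus-description pattern looks roughly like this (only a handful of codes are shown, with paraphrased descriptions, purely as an illustration):

```go
package railreader

// ActivityCode is one of Darwin's short activity codes. Only a few of the
// many codes are shown here, as an example of the pattern.
type ActivityCode string

const (
	ActivityStop        ActivityCode = "T" // stops to take up and set down passengers
	ActivitySetDownOnly ActivityCode = "D" // stops to set down passengers only
	ActivityTakeUpOnly  ActivityCode = "U" // stops to take up passengers only
	ActivityRequestStop ActivityCode = "R" // stops on request
)

var activityDescriptions = map[ActivityCode]string{
	ActivityStop:        "stops to take up and set down passengers",
	ActivitySetDownOnly: "stops to set down passengers only",
	ActivityTakeUpOnly:  "stops to take up passengers only",
	ActivityRequestStop: "stops on request",
}

// String returns a human-readable description, falling back to the raw code.
func (a ActivityCode) String() string {
	if desc, ok := activityDescriptions[a]; ok {
		return desc
	}
	return string(a)
}
```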

I started using a program called XMLSpy to graphically draw out the XML definitions into a printable graph, and it works amazingly! I only get a 30-day free trial though. Attached is a picture of what a printed-out XMLSpy tree looks like, for just the train forecast data.

Update attachment

This isn't tracked by Hackatime, but I spent quite a few hours on a banner design for the project. I attached the PNG version to this log (the real one is an SVG).

Update attachment

Turns out that the document that lists all of the activity codes etc. (v29, from 2014) is actually still the latest version! This is both disappointing and exciting - disappointing because there is no better documentation than the existing stuff, and exciting because I can just use v29 without having to worry about waiting for a reply to my email.

This does unfortunately mean I have probably sent an email that has already been sent thousands of times before by hundreds of others since 2014, and will get no answer. Oh well!

https://groups.google.com/g/openraildata-talk/c/J7L2rnvpMhc/m/06H_nACeAgAJ

Since my last devlog, I have separated out the unmarshalling code even more, added every possible activity code in an enum, and started re-marshalling data back into XML too, so it can be compared to the original!
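
The re-marshalling is handy for round-trip tests: unmarshal a snippet, marshal it again, and check nothing was silently dropped. A sketch of the approach (not the real tests, and the element is invented):

```go
package railreader

import (
	"encoding/xml"
	"testing"
)

// via is a simplified element, not the real Darwin schema.
type via struct {
	XMLName xml.Name `xml:"via"`
	Dest    string   `xml:"dest,attr"`
	Text    string   `xml:",chardata"`
}

// TestViaRoundTrip checks that unmarshalling and re-marshalling a snippet
// produces the same XML, which is one way to catch silently dropped fields.
func TestViaRoundTrip(t *testing.T) {
	original := `<via dest="LEEDS">via York</via>`

	var v via
	if err := xml.Unmarshal([]byte(original), &v); err != nil {
		t.Fatal(err)
	}
	remarshalled, err := xml.Marshal(v)
	if err != nil {
		t.Fatal(err)
	}
	if string(remarshalled) != original {
		t.Errorf("round trip changed the XML:\n got %s\nwant %s", remarshalled, original)
	}
}
```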

Update attachment

I have spent a LONG time reading through as much train/rail documentation as I can find about Darwin and TRUST, from Network Rail and National Rail. Attached is one of the documents I have read through, but I have read a lot of others - quite a lot of data is hidden in the XML specification! I'm currently waiting on a response from Network Rail via email to get the latest copy of the document that lists every Activity Code, Status, and others. This API takes a LOT more time reading docs to figure out than writing code to make it work.

My goal for finishing the API component of my app that decodes the train data is to have code that can be read and understood even with minimal knowledge of the internals of the UK's train system - all of the useful API information is spread out across many, many sources, so hopefully my repo can be a complete code example for using and interpreting train data.

Most recently, I have fully fleshed out the ScheduleInformation type to include all possible train schedule location types, decoding into a generic type for all locations, then specializing based on the name of the XML element.

I have set up default values for the ScheduleInformation structure by implementing a custom XML parsing function that fills in defaults automatically. I have also started parsing locations from the schedule.

TODO is to have separate types for every location type.
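
The custom parsing works roughly like this: set the defaults, decode the element over them so only values present in the XML are overwritten, and record which element it was so it can be specialised later. A sketch with invented names and an invented default, not the real railreader code:

```go
package railreader

import "encoding/xml"

// scheduleLocation is an illustrative generic location; the real Darwin
// schema has several location elements (OR, IP, DT, ...) with shared fields.
type scheduleLocation struct {
	Kind     string // taken from the XML element name, e.g. "OR" or "DT"
	TIPLOC   string `xml:"tpl,attr"`
	Activity string `xml:"act,attr"`
}

// UnmarshalXML fills in defaults before decoding, so attributes absent from
// the XML keep a sensible value, and records which element this was.
func (l *scheduleLocation) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
	// Decode into an alias type so DecodeElement doesn't recurse back here.
	type plain scheduleLocation
	p := plain{
		Activity: "T", // example default - the real schema defines its own
	}
	if err := d.DecodeElement(&p, &start); err != nil {
		return err
	}
	*l = scheduleLocation(p)

	// Specialise based on which location element this actually was.
	l.Kind = start.Name.Local
	return nil
}
```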

Update attachment

I have switched from using the JSON feed to using the XML feed, as the XML is the original format, and is better documented. I have also spent lots of time researching all of the different elements of the train data, so I can provide comments for documentation! So far, I have schedule data as a structure, but TODO is all of the other possible update types.

Update attachment

First update about 'Rail Regard'! (working title)
So far I have:
- Gotten (free) access to National Rail's Darwin data stream from raildata.org.uk
- Read and annotated the entirety of the Darwin Interface Specification to understand the crazy amount of data available.
- Written the start of my API: a Go program that (so far) subscribes to the data stream and asynchronously unmarshals some of it into usable structures (a rough sketch of the pattern is below)!
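
Something shaped like this, at least - the transport and the message struct are placeholders, not the real feed or types:

```go
package main

import (
	"encoding/xml"
	"log"
	"sync"
)

// pushPortMessage is a placeholder for the real Darwin message structures.
type pushPortMessage struct {
	TS string `xml:"ts,attr"`
}

func main() {
	// raw stands in for whatever the subscription delivers message bodies on.
	raw := make(chan []byte)

	// A small pool of workers unmarshals messages concurrently.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for body := range raw {
				var msg pushPortMessage
				if err := xml.Unmarshal(body, &msg); err != nil {
					log.Printf("bad message: %v", err)
					continue
				}
				log.Printf("message at %s", msg.TS)
			}
		}()
	}

	// In the real program, the feed subscription would write into raw here.
	raw <- []byte(`<Pport ts="2025-06-20T10:00:00Z"/>`)
	close(raw)
	wg.Wait()
}
```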

Update attachment
Edward Hesketh
Edward Hesketh created a project
80d ago

Rail Regard

A UK train app that shows as much detail as possible about the train you are taking.

Rail Regard
32 devlogs 1 follower
Edward Hesketh
Edward Hesketh joined Summer of Making
102d ago

This was widely regarded as a great move by everyone.