June 22, 2025
Tried rewriting the calculations in Rust so they're more efficient and faster, and so they can support larger numbers and sizes. Also tried to build a Python module from my Rust code and embed it into the Python code, but it's not fully working yet.
Updated some things and fixed the axis symmetry. Also working on analyzing the properties of numbers in their PFS geometric visualisation; in the picture you can see the number 6469693230 (sadly it still comes out in a sort of Fibonacci spiral, when it should be a regular polygon with k corners, k being the number of distinct prime factors/axes).
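The intended geometry can be sketched in a few lines. This is my own minimal reconstruction, not the project's actual code: each distinct prime gets a radial axis, the exponent is the distance along that axis, and connecting the points should give a regular k-gon whenever all exponents are equal, as they are for 6469693230 = 2·3·5·7·11·13·17·19·23·29.

```rust
/// Trial-division factorization into (prime, exponent) pairs.
fn prime_exponents(mut n: u64) -> Vec<(u64, u32)> {
    let mut factors = Vec::new();
    let mut p = 2;
    while p * p <= n {
        if n % p == 0 {
            let mut e = 0;
            while n % p == 0 { n /= p; e += 1; }
            factors.push((p, e));
        }
        p += 1;
    }
    if n > 1 { factors.push((n, 1)); }
    factors
}

/// Place the i-th prime axis at angle 2*pi*i/k and return the 2-D vertex
/// for each (prime, exponent) pair; equal exponents give a regular k-gon.
fn pfs_polygon(n: u64) -> Vec<(f64, f64)> {
    let factors = prime_exponents(n);
    let k = factors.len() as f64;
    factors.iter().enumerate()
        .map(|(i, &(_, e))| {
            let angle = 2.0 * std::f64::consts::PI * (i as f64) / k;
            (e as f64 * angle.cos(), e as f64 * angle.sin())
        })
        .collect()
}

fn main() {
    // 10 distinct prime factors, all with exponent 1 -> regular 10-gon.
    let verts = pfs_polygon(6469693230);
    println!("{} vertices: {:?}", verts.len(), verts);
}
```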
Tried to implement the key features, including the visualization in Python (the web interface is coded in Python) and embedding many axes in a three-dimensional coordinate system. It doesn't completely work yet, only some basic stuff: you can see lines and sometimes triangles, but it's still buggy. The coordinates also aren't showing correctly, because points in e.g. 17-dimensional space keep their high-dimensional coordinates when mapped into a regular three-dimensional space. I will have to think about how to transform this correctly!
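One common way to do that transformation is to project each high-dimensional point through radially arranged unit vectors (often called star coordinates), which matches the circular axis layout described in this project. A minimal sketch under that assumption; the function name is mine, not from the project:

```rust
/// Project a k-dimensional exponent vector into the xy-plane by assigning
/// each axis i a radial unit vector at angle 2*pi*i/k and summing the
/// contributions. A z-coordinate can then be used for stacking numbers.
fn radial_project(exponents: &[f64]) -> (f64, f64) {
    let k = exponents.len() as f64;
    exponents.iter().enumerate().fold((0.0, 0.0), |(x, y), (i, &e)| {
        let angle = 2.0 * std::f64::consts::PI * (i as f64) / k;
        (x + e * angle.cos(), y + e * angle.sin())
    })
}

fn main() {
    // A 17-dimensional point collapses to one 2-D coordinate instead of
    // leaking its raw 17-D coordinates into the 3-D scene.
    let point = vec![1.0; 17];
    let (x, y) = radial_project(&point);
    // Perfectly symmetric input lands at the origin (up to rounding).
    println!("({x:.6}, {y:.6})");
}
```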
This project visualizes natural numbers as geometric figures in a high-dimensional space defined by their prime factor exponents. Each prime number corresponds to an axis arranged radially in a circle, and a number's prime exponents determine its coordinates along these axes. Stacking these polygons vertically represents multiple numbers, creating a 3D prime-factor geometry. This novel visualization aids intuitive understanding of prime factorization structures and explores new geometric insights in number theory. It combines algebraic concepts with interactive 3D graphics for educational and research applications.
Just created the project and tried defining some really basic concepts; sadly I got some weird bugs when using enums from other .rs files...
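The usual cause of that kind of bug is visibility: the enum (and anything matching on it) has to be `pub`, and the other file has to bring it in with a `use` path. A sketch with inline modules standing in for separate files; the module and type names are illustrative, not from the project:

```rust
mod shapes {           // would live in shapes.rs
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub enum Axis {    // `pub`, otherwise it is private to this module
        Prime(u64),
    }
}

mod geometry {         // would live in geometry.rs
    use crate::shapes::Axis; // import the enum from the sibling module

    pub fn label(a: Axis) -> String {
        match a {
            Axis::Prime(p) => format!("axis for prime {p}"),
        }
    }
}

fn main() {
    let a = shapes::Axis::Prime(7);
    println!("{}", geometry::label(a));
}
```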
I am building a solver for proof-based geometric problems in rust.
These are my final results; I completed everything I wanted to do. My algorithm for GCD and LCM is slightly faster than binary GCD/LCM (the most widely used one at the moment), and I got nice graphs and everything. Only the benchmark for memory usage of the algorithms isn't working (I'm unable to fix it at the moment, and it's not that important for the project).
Optimized the algorithm, added a mix of standard calculations and pure cases, added things to the GUI, and tried to add memory benchmarking; it still has some issues though.
Wrote all the algorithms in Rust and implemented the binary GCD and LCM algorithms for comparison. After a lot of optimization, my algorithm is equal or faster up to 12-bit inputs. Also made a little GUI to graph the results; the vertical axis is time (nanoseconds, ns) and the horizontal axis is input size in bits (4-32).
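For reference, the baseline being benchmarked here is Stein's binary GCD, which replaces division with shifts and subtraction. A self-contained sketch (my own reconstruction of the standard algorithm, not the project's tuned version):

```rust
/// Binary GCD (Stein's algorithm): strip common factors of two with shifts,
/// then repeatedly subtract the smaller odd value from the larger one.
fn binary_gcd(mut a: u64, mut b: u64) -> u64 {
    if a == 0 { return b; }
    if b == 0 { return a; }
    let shift = (a | b).trailing_zeros(); // shared factors of 2
    a >>= a.trailing_zeros();
    loop {
        b >>= b.trailing_zeros(); // b is made odd each round
        if a > b { std::mem::swap(&mut a, &mut b); }
        b -= a;
        if b == 0 { break; }
    }
    a << shift
}

/// LCM built on GCD; dividing before multiplying avoids needless overflow.
fn lcm(a: u64, b: u64) -> u64 {
    if a == 0 || b == 0 { return 0; }
    a / binary_gcd(a, b) * b
}

fn main() {
    println!("gcd(48, 18) = {}", binary_gcd(48, 18));
    println!("lcm(4, 6) = {}", lcm(4, 6));
}
```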
I changed some things and did some optimisations on the algorithms, implementing them in Rust, since Python isn't that reliable for runtime tests, memory, etc. Tried to implement SIMD for the vector additions, implemented GCD and LCM for numbers smaller than 32 bits, and ran some tests to check the performance, which was surprisingly good! Can't push it to GitHub right now though, as I'm having some issues; I'll do that tomorrow.
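The reason SIMD fits here: once two numbers share a prime basis, GCD is the element-wise minimum of their exponent vectors and LCM the element-wise maximum, which is exactly the kind of independent per-lane loop a vectorizer can handle. A minimal scalar sketch of that idea (my own illustration, not the project's SIMD code):

```rust
/// GCD over a shared prime basis: take the smaller exponent per prime.
fn pfs_gcd(a: &[u32], b: &[u32]) -> Vec<u32> {
    a.iter().zip(b).map(|(x, y)| *x.min(y)).collect()
}

/// LCM over a shared prime basis: take the larger exponent per prime.
fn pfs_lcm(a: &[u32], b: &[u32]) -> Vec<u32> {
    a.iter().zip(b).map(|(x, y)| *x.max(y)).collect()
}

fn main() {
    // Basis [2, 3, 5]: 12 = 2^2 * 3 -> [2,1,0], 45 = 3^2 * 5 -> [0,2,1]
    println!("{:?}", pfs_gcd(&[2, 1, 0], &[0, 2, 1])); // exponents of gcd = 3
    println!("{:?}", pfs_lcm(&[2, 1, 0], &[0, 2, 1])); // exponents of lcm = 180
}
```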
I had a look at the previous work I did in Python and tried to figure out why the measurements were weird: my factorization algorithm wasn't implemented correctly. The Python version was just for basic testing; now that I'm doing this as an actual project, I'm switching to Rust to have better control over memory, etc.
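A cheap guard against exactly this kind of bug is a round-trip check: multiply the reported factors back together and compare with the input. Sketched here with plain trial division as a stand-in for the real factorizer:

```rust
/// Stand-in factorizer: trial division into (prime, exponent) pairs.
fn factorize(mut n: u64) -> Vec<(u64, u32)> {
    let mut out = Vec::new();
    let mut p = 2u64;
    while p * p <= n {
        if n % p == 0 {
            let mut e = 0u32;
            while n % p == 0 { n /= p; e += 1; }
            out.push((p, e));
        }
        p += 1;
    }
    if n > 1 { out.push((n, 1)); }
    out
}

/// Round-trip check: the product of p^e over all factors must equal n.
fn is_valid(n: u64, factors: &[(u64, u32)]) -> bool {
    factors.iter().map(|&(p, e)| p.pow(e)).product::<u64>() == n
}

fn main() {
    let f = factorize(360); // 360 = 2^3 * 3^2 * 5
    println!("{f:?} valid: {}", is_valid(360, &f));
}
```

Running this check on every benchmark input makes a broken factorizer show up immediately instead of silently skewing the measurements.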
In the screenshot you can see a part of the code that I reviewed; it was matrix multiplication with my method.
I'm building a high-performance arithmetic system in Rust that rethinks number representation. By encoding integers as prime exponent vectors, it converts:
- Multiplication → vector addition
- GCD/LCM → element-wise min/max operations
- Matrix math → optimized vector transforms

The core innovation exploits the mathematical properties of prime factorization. For general multiplication, modern hardware outperforms this approach, because traditional arithmetic is extremely optimized; the key bottleneck is factorization latency (the same reason RSA works). So today it's not viable for multiplication on its own, but one can identify the cases where it is most efficient and combine the binary GCD algorithm with the PFS (Prime Factor System) algorithm, which yields an optimized version of the binary GCD and LCM algorithms (the latter is heavily based on GCD). Future quantum integration could revolutionize multiplication too:
- Shor's algorithm would enable efficient factorization
- Practical for huge numbers (n > 2^25 bits, based on my calculations)
- Prime vectors would become a universal accelerator for many operations
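The "multiplication → vector addition" conversion above can be shown in a few lines over a small fixed prime basis. The basis and names here are illustrative choices of mine, not the project's actual API:

```rust
// Illustrative fixed basis; real inputs would need a basis covering
// all their prime factors.
const BASIS: [u64; 4] = [2, 3, 5, 7];

/// Encode n as its exponent vector over BASIS, or None if n has a
/// prime factor outside the basis.
fn to_vector(mut n: u64) -> Option<[u32; 4]> {
    let mut v = [0u32; 4];
    for (i, &p) in BASIS.iter().enumerate() {
        while n % p == 0 { n /= p; v[i] += 1; }
    }
    (n == 1).then_some(v)
}

/// Decode an exponent vector back into an integer.
fn from_vector(v: &[u32; 4]) -> u64 {
    BASIS.iter().zip(v).map(|(p, e)| p.pow(*e)).product()
}

/// Multiplication in PFS form: just add the exponent vectors.
fn multiply(a: &[u32; 4], b: &[u32; 4]) -> [u32; 4] {
    let mut out = [0u32; 4];
    for i in 0..4 { out[i] = a[i] + b[i]; }
    out
}

fn main() {
    // 12 = [2,1,0,0], 35 = [0,0,1,1]; their sum decodes to 12 * 35.
    let (a, b) = (to_vector(12).unwrap(), to_vector(35).unwrap());
    println!("{}", from_vector(&multiply(&a, &b))); // 420
}
```

The `None` case is the whole story of the bottleneck: encoding (factorization) is the expensive step, while the multiplication itself is a trivial addition.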
This was widely regarded as a great move by everyone.