Introduction/Purpose: Traditionally, foot and ankle alignment is quantified on 2D standing radiographs. This imaging method presents limitations in capturing the 3D nature of foot and ankle alignment, and the manual measurements obtained are prone to observer error. This study aims to integrate weight-bearing CT (WBCT) imaging and advanced deep learning techniques to automate and enhance quantification of 3D foot and ankle alignment.

Methods: Thirty-two patients who underwent a WBCT of the foot and ankle were retrospectively included. A 3D nnU-Net model was trained and validated on 45 cases to automate segmentation into 3D bone models; segmentation quality was assessed by the Hausdorff distance (which measures how far apart two 3D bone models are) and the Dice coefficient (which measures the similarity between two 3D bone models). Afterwards, 35 clinically relevant 3D measurements were automatically computed using a custom-made tool. Automated measurements were assessed for accuracy against manual measurements, while the latter were analyzed for inter-observer reliability.

Results: Deep learning segmentation showed a mean Hausdorff distance of 1.41 mm and a mean Dice coefficient of 0.95. Good to excellent reliability and a mean prediction error of under 2 degrees were found for all angles except the talonavicular coverage angle and the distal metatarsal articular angle, whose errors were under 2.5 degrees.

Conclusion: In summary, this study introduces a fully automated framework (Fig. 1) to quantify foot and ankle alignment and demonstrates reliability comparable to current clinical practice measurements. This operator-friendly and time-efficient tool holds promise for implementation in a clinical setting, benefiting both radiologists and surgeons. Future studies are encouraged to assess the tool's impact on streamlining image assessment workflows in clinical practice.
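The two segmentation metrics reported above can be made concrete with a short sketch. The snippet below is an illustrative implementation (not the authors' code) of the Dice coefficient and the symmetric Hausdorff distance on binary voxel masks, assuming NumPy/SciPy and a hypothetical isotropic voxel `spacing` parameter to convert voxel units to millimetres:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between the voxel sets of two masks,
    scaled by voxel spacing so the result is in physical units (e.g. mm)."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```

A Dice coefficient of 0.95 and a Hausdorff distance of 1.41 mm, as reported in the Results, would indicate near-complete overlap between predicted and reference bone models, with the worst-matched surface point lying under 1.5 mm away.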