Neural radiance representations are increasingly popular in performance capture and virtual production for their cost-effectiveness and photorealistic quality. However, a significant gap remains between radiance field rendering and physically-based rendering (PBR) approaches in their ability to support post-production lighting adjustments. Notably, existing neural radiance field approaches often assume global illumination, which does not accurately capture the localized lighting conditions of controlled studio environments. In this paper, we introduce RelightableStudio, a novel PBR framework designed for relighting dynamic volumetric scenes. Our approach models lighting conditions with learnable spotlights and employs deformable Gaussian representations for accurate reflectance layer decomposition and precise lighting control. We integrate learnable light transport to efficiently compute visibility and occlusion alongside the learned lighting and material properties. Extensive experiments show that RelightableStudio achieves state-of-the-art performance in material and lighting decomposition, relighting fidelity, and novel view synthesis. The framework offers a compact, end-to-end, editable representation optimized for low-cost, high-quality video post-production in studio environments.