This paper presents an algorithm for estimating the Surface Light Field from video sequences acquired by moving the camera around the object. Unlike other state-of-the-art methods, it does not require a uniform sampling density of the view directions; instead, it builds an approximation of the Surface Light Field from a biased video acquisition, dense along the camera path and completely missing in the other directions. The main idea is to estimate two components separately: the diffuse color, computed with statistical operations that also provide a rough approximation of the directions of the main light sources in the acquisition environment, and the residual Surface Light Field effects, modeled as a linear combination of spherical functions. Qualitative and numerical evaluations show that the final renderings achieve high fidelity to the input video frames, without ringing or banding artifacts.
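The two-component decomposition described above could be sketched as follows. The paper's exact statistical operators and spherical basis are not given here, so this minimal illustration assumes a per-point median for the diffuse term and real spherical harmonics (bands 0-2) as the spherical functions, fitted to the view-dependent residual by least squares; function names and parameters are hypothetical.

```python
import numpy as np

def sh_basis(dirs):
    # Real spherical harmonics up to band 2 (9 functions), evaluated
    # at unit view directions; dirs has shape (N, 3).
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),          # l=0
        0.488603 * y,                        # l=1, m=-1
        0.488603 * z,                        # l=1, m=0
        0.488603 * x,                        # l=1, m=1
        1.092548 * x * y,                    # l=2, m=-2
        1.092548 * y * z,                    # l=2, m=-1
        0.315392 * (3 * z**2 - 1),           # l=2, m=0
        1.092548 * x * z,                    # l=2, m=1
        0.546274 * (x**2 - y**2),            # l=2, m=2
    ], axis=1)

def fit_point(colors, view_dirs):
    """Split the radiance samples of one surface point into a diffuse
    term (robust median over all observations) and a view-dependent
    residual expressed as spherical-harmonic coefficients."""
    diffuse = np.median(colors, axis=0)      # (3,) robust diffuse color
    B = sh_basis(view_dirs)                  # (N, 9) SH basis matrix
    coeffs, *_ = np.linalg.lstsq(B, colors - diffuse, rcond=None)
    return diffuse, coeffs                   # coeffs: (9, 3)

def render_point(diffuse, coeffs, view_dir):
    """Reconstruct the color seen from a novel view direction."""
    return diffuse + sh_basis(view_dir[None, :]) @ coeffs
```

Because the observed directions cover only the camera path, a low-order basis like this keeps the extrapolation to unobserved directions smooth, which is consistent with the absence of ringing and banding noted in the evaluation.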